Article

SNR Analysis for Quantitative Comparison of Line Detection Methods

Department of Civil Engineering, Kyungpook National University, Daegu 41566, Korea
Appl. Sci. 2021, 11(21), 10088; https://doi.org/10.3390/app112110088
Submission received: 6 September 2021 / Revised: 18 October 2021 / Accepted: 25 October 2021 / Published: 28 October 2021

Abstract

The need for line detection in images is growing rapidly owing to its importance in many image processing applications. Selecting an appropriate line detection method is essential for the accurate detection of line pixels, but few studies provide an analytical basis for choosing a specific method. To address this problem, this study proposes a method to determine the signal-to-noise ratio (SNR) of line detection methods analytically. Three line detection methods were selected for comparison: edge-detection (ED)-based, second-derivative (SD)-based, and sum of gradient angle differences (SGAD)-based line detection. The SNRs of the three line detectors are then quantified through error propagation, and the derived SNRs are visualized graphically to compare the performance of the detectors explicitly. The quantified SNRs were validated by showing that they are highly correlated with the completeness and correctness observed in experiments with a set of natural images. The experimental results show that the proposed SNR analysis can be used to select or design a suitable line detector.

1. Introduction

In the field of image processing, edges and lines are important features used to detect object shapes in a scene. In the literature, lines are classified as “ridge” or “valley” depending on their intensity relative to that of their neighbors [1,2,3,4,5,6]. Lines are primary features observed in various types of images and are used to detect and recognize the appearance of objects in an image. Recently, with the development of various applications, the demand for efficient line extraction methods has been increasing rapidly [7,8,9,10,11,12,13].
Frequently, edge detection (ED) is used to indirectly extract line features [7,8,9,10,11,12], and different operators have been proposed for ED [14,15,16,17,18]. However, ED-based methods are often inefficient for detecting lines because they require the detection of edge pixels on both sides of the lines.
Quality line extraction requires line detection with a high signal-to-noise ratio (SNR) as well as high-quality subpixel line localization and linking processes [5]. Moreover, although criteria for selecting a method to detect linear features in noisy images are always needed, few studies have compared the performance of line detection methods to help establish such criteria. Therefore, in this study, a quantitative comparison of the effectiveness of line detection methods, and of their merits and demerits in detecting line features under varying conditions, is presented based on SNR analysis.
In this study, the influence of image smoothing on both the signal and noise strengths was first investigated. For the comparative study of line feature detectors, a second-order derivative (SD)-based method [19] was first selected and compared with an ED-based indirect method.
Moreover, the SD-based line detection method was shown to produce dislocalized line pixels at relatively large line widths [6]. A line detection method based on the sum of gradient angle differences (SGAD) was proposed to overcome these problems [6]. Evaluating line detectors based on experiments with a limited number of test images does not comprehensively demonstrate their performance under varying conditions. Thus, analytical quantification of the SNR of line detectors is required to determine their efficiency. An SNR analysis was used to identify an optimal edge detector in [18]. However, few studies have considered evaluating line detectors based on analytical quantification of their SNR. This primarily motivated the quantitative derivation of the SNRs of line detectors in this study.
A set of line detectors were compared based on their experimental characteristic curves in [20]. Moreover, multiple line detection methods were evaluated using multiple images in [21]. However, evaluating line detectors with a set of images does not comprehensively demonstrate their performance under varying conditions. In this study, the quantitative derivation of the SNRs of line detectors overcomes the limitations of the existing evaluation methods. The contributions of this study are listed below:
  • Streamlined derivation of the SNRs of line detectors including the derivation of correlations among noises after smoothing; derivations of signal and noise strengths of elementary operators; combination of SNRs for line detectors; and application of a penalty function for considering the influence of blurring, smoothing, and line width on line detectors.
  • Analytical quantification of the SNRs of line detectors using error propagation and visualization of the quantity of the derived values including signal strengths, noise strengths, and SNRs.
  • Verification of the validity of the SNRs derived for the SD- and SGAD-based line detectors by investigating the relationship of the SNRs with completeness and correctness of their line detection results.
The rest of this study is organized as follows. Section 2 describes the related studies for line detection. Section 3 compares the performance of the ED-based and SD-based line detections based on SNR analysis. Section 4 describes the SNR of the SGAD-based line detection method. The results obtained using real images are described in Section 5 and are followed by the conclusion in Section 6.

2. Related Work

Pixels in an image were classified into edges, ridges, valleys, and other geometric features based on the analysis of the coefficients resulting from fitting a Legendre polynomial of degree two or less to local intensities in [22]. However, this approach is cumbersome for line detection because it requires multiple steps to detect line pixels and a set of well-tuned thresholds. In another study, ridges and valleys were detected by finding zero crossings of the first-order directional derivatives [1]. The method was based on 10 coefficients resulting from fitting a bivariate cubic polynomial to local intensities. One of the limitations of this method is that it may produce nontrivial ringing artifacts, and the approach requires a large window size for the 10 coefficients of the bivariate cubic polynomial to be calculated. A Zernike moment-based, parametric edge and line detection method was proposed in [23]. Differential geometry was used to detect line pixels in [2,3]. One of the limitations of this approach is that it requires a high processing time to calculate the first-, second-, and third-order derivatives. A line detection approach based on second-order derivatives was proposed in [19,24]. In similar studies, this SD-based approach was used to detect line pixels in aerial images and synthetic aperture radar (SAR) images [25,26]. However, as will be shown in the following sections, SD-based line detection methods have limitations at relatively large line widths. A multi-scale-based line detection approach was proposed in [24]. This approach may, however, face limitations when line features are located close to other non-homogeneous features. These limitations are attributed to the blending of neighboring features at larger scales.
A simple line detection method using relational operations was proposed in [4]. Although its computational cost is low, one of the limitations of this approach is that selecting an appropriate window size depends on the width of the line to be detected. In another approach, multi-scale anisotropic Gaussian kernels were used for line detection [27]; however, this approach incurs a high computational cost. Moreover, a multi-step line detection approach was proposed in [28], whose computation is cost-intensive because of the convolution of a set of filters and a sequence of steps. Recently, a convolutional neural network-based approach was proposed to detect the boundaries among walls, floors, and ceilings in [29]. However, this approach requires a well-trained network to implement boundary detection and detects only predefined types of boundary pixels such as wall–wall, wall–ceiling, and wall–floor.
Furthermore, a creaseness measure was used to detect line pixels in [21]. One of the limitations of this approach is that a simple sum of gradient differences may lose the direction information in detecting line pixels. The SGAD-based line detection approach resolved this limitation using the addition of absolute values of gradient angle differences [6].

3. Method—SNR of ED and SD for Line Detection

It is necessary to choose between edge and line detectors when extracting linear features from an image. To make the right decision, it is necessary to know their performance under various conditions such as line width, noise level, and the smoothing factor used for noise suppression. Although this information is important for making the decision, very few studies have focused on this issue. To address this gap, in this section, the performances of ED and SD are compared by investigating their respective SNRs under various conditions.

3.1. Line Model and Derivation of Its Derivatives

In camera-captured images, there is always a certain amount of blur and noise. An image I can be modeled by a signal F convolved with a certain amount of blur b plus noise n, which is described in various approaches [30,31,32,33,34,35,36,37,38] as follows:
$$ I = F * b + n, \qquad (1) $$
where ∗ indicates the convolution operator.
Variations of blurring patterns around edges because of varying contrast have been analyzed by modeling edge profiles with two blur parameters in [39]. For simplicity, however, the imaging model with one blur parameter in Equation (1) is used in the following derivation. In this study, the line profile has been modeled by two factors, namely, width, w, and contrast, k, as shown in Figure 1. Then, according to Figure 1, the 1D line signal can be mathematically modeled as follows:
$$ F(x) = \begin{cases} h + k, & \text{if } |x - L| < \dfrac{w}{2}, \\ h, & \text{otherwise,} \end{cases} \qquad (2) $$
where h, L, and x are the background intensity, coordinate of the center of the line, and coordinate of an arbitrary location, respectively. L and x are the coordinates with reference to the line normal direction.
To consider the blurring effect in the image formation process, a 1D Gaussian blur function is introduced as follows:
$$ b(t) = \frac{1}{\sqrt{2\pi}\,\sigma_b}\, e^{-\frac{t^2}{2\sigma_b^2}}, \qquad (3) $$
where $\sigma_b$ and $t$ are the blurring factor and the distance from the center of the Gaussian function, respectively. The dashed line in Figure 2 shows a scaled version of the blur function, $k\cdot\exp\!\left(-\frac{(L-x)^2}{2\sigma_b^2}\right)$, with variation of the location $x$. Thus, from Figure 2, the intensity value at $x$ captured by a camera can be derived as follows:
$$ F_b(x) = (F * b)(x) = \int_{x-\frac{w}{2}-L}^{\,x+\frac{w}{2}-L} \frac{k}{\sigma_b\sqrt{2\pi}}\, e^{-\frac{t^2}{2\sigma_b^2}}\, dt. \qquad (4) $$
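For reference, the integral in Equation (4) can be evaluated in closed form with the Gaussian error function. The following is a minimal sketch (Python; the function name and parameter defaults are illustrative, not from the paper) of how the blurred line profile can be computed:

```python
import math

def blurred_line(x, L=0.0, w=4.0, k=1.0, h=0.0, sigma_b=1.0):
    """Blurred line intensity of Equation (4), evaluated with the error
    function; the background level h is added back for completeness."""
    s = sigma_b * math.sqrt(2.0)
    upper = (x + w / 2.0 - L) / s
    lower = (x - w / 2.0 - L) / s
    return h + 0.5 * k * (math.erf(upper) - math.erf(lower))

# Sample the profile across the line normal direction.
profile = [blurred_line(x) for x in range(-10, 11)]
```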
For noise removal, image smoothing is usually applied by convoluting an image with a 2D Gaussian function, which can be described as follows:
$$ s(u, v) = \frac{1}{2\pi\sigma_s^2}\, e^{-\frac{u^2+v^2}{2\sigma_s^2}}, \qquad (5) $$
where σ s is the smoothing factor, and u and v are the distances from the center of the Gaussian function in the column and row directions, respectively. Accordingly, its 1D function can be written as follows:
$$ s(x) = \frac{1}{\sqrt{2\pi}\,\sigma_s}\, e^{-\frac{x^2}{2\sigma_s^2}}. \qquad (6) $$
Then, the application of the smoothing function s generates the image I s , which can be written as follows:
$$ I_s = I * s = (F * b + n) * s = F * b * s + n * s. \qquad (7) $$
Thus, after blurring and smoothing, the line signal in Equation (2) is reformed as per [40] as follows:
$$ F_{bs}(x) = (F * b * s)(x) = \int_{x-\frac{w}{2}-L}^{\,x+\frac{w}{2}-L} \frac{k}{\sqrt{2\pi(\sigma_b^2+\sigma_s^2)}}\, e^{-\frac{t^2}{2(\sigma_b^2+\sigma_s^2)}}\, dt. \qquad (8) $$
Thus, the first derivative of the cross section profile of the smoothed line with respect to x is derived from Equation (8) as follows:
$$ F'_{bs}(x) = \frac{dF_{bs}}{dx} = \frac{k}{\sqrt{2\pi(\sigma_b^2+\sigma_s^2)}}\left[ e^{-\frac{(x+\frac{w}{2}-L)^2}{2(\sigma_b^2+\sigma_s^2)}} - e^{-\frac{(x-\frac{w}{2}-L)^2}{2(\sigma_b^2+\sigma_s^2)}} \right]. \qquad (9) $$
Additionally, the second derivative of the cross section profile of the smoothed line with respect to x is derived from Equation (9) as follows:
$$ F''_{bs}(x) = \frac{d^2F_{bs}}{dx^2} = -\frac{k}{\sqrt{2\pi}\,(\sigma_b^2+\sigma_s^2)^{3/2}}\left[ e^{-\frac{(x+\frac{w}{2}-L)^2}{2(\sigma_b^2+\sigma_s^2)}}\left(x+\tfrac{w}{2}-L\right) - e^{-\frac{(x-\frac{w}{2}-L)^2}{2(\sigma_b^2+\sigma_s^2)}}\left(x-\tfrac{w}{2}-L\right) \right]. \qquad (10) $$

3.2. Measure of Signal Strengths

Edge strengths are measured at the boundaries of the smoothed line model in terms of the values of the first and second derivatives. The first derivative indicates the gradient of the smoothed edge profile and is derived by substituting $x = L - \frac{w}{2}$ in Equation (9) as follows:
$$ F'_{bs}\Big|_{x=L-\frac{w}{2}} = \frac{k}{\sqrt{2\pi(\sigma_b^2+\sigma_s^2)}}\left[ 1 - e^{-\frac{w^2}{2(\sigma_b^2+\sigma_s^2)}} \right]. \qquad (11) $$
To distinctly determine an edge pixel in a local area of an image in a non-maxima suppression process, the absolute values of the gradients at its neighboring pixels should be sufficiently lower than the absolute value at the edge location, $x = L - \frac{w}{2}$. Thus, the absolute value of the second derivative at the neighboring pixels should be high. Therefore, the neighboring pixel that is one pixel away from the edge location is selected. The second derivative at the pixel location $x = L - \frac{w}{2} - 1$ is derived as another measure of edge strength from Equation (10) as follows:
$$ F''_{bs}\Big|_{x=L-\frac{w}{2}-1} = \frac{k}{\sqrt{2\pi}\,(\sigma_b^2+\sigma_s^2)^{3/2}}\left[ e^{-\frac{1}{2(\sigma_b^2+\sigma_s^2)}} - e^{-\frac{(1+w)^2}{2(\sigma_b^2+\sigma_s^2)}}\,(1+w) \right]. \qquad (12) $$
Alternatively, to distinctly determine a line pixel in a non-maxima suppression process, the absolute values of the first derivatives, or the gradients, at the neighboring pixels of the line pixel should be high. Therefore, as one of the neighboring pixels, the pixel located at $x = L - 1$ is selected. The first derivative at this pixel, as a measure of line strength, is derived from Equation (9) as follows:
$$ F'_{bs}\Big|_{x=L-1} = \frac{k}{\sqrt{2\pi(\sigma_b^2+\sigma_s^2)}}\left[ e^{-\frac{(1-\frac{w}{2})^2}{2(\sigma_b^2+\sigma_s^2)}} - e^{-\frac{(1+\frac{w}{2})^2}{2(\sigma_b^2+\sigma_s^2)}} \right]. \qquad (13) $$
Moreover, for detecting a line pixel, the absolute value of its second derivative at the line location $x = L$ must be high; this value can be derived from Equation (10) as follows:
$$ F''_{bs}\Big|_{x=L} = -\frac{k\,w}{\sqrt{2\pi}\,(\sigma_b^2+\sigma_s^2)^{3/2}}\, e^{-\frac{w^2}{8(\sigma_b^2+\sigma_s^2)}}. \qquad (14) $$
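The four signal-strength measures above can be evaluated directly from Equations (11)–(14). The following sketch collects them in one function (names and the example parameters are illustrative, not the author's code):

```python
import math

def line_signal_strengths(w, k, sigma_b, sigma_s):
    """Signal strengths of Equations (11)-(14) for a blurred and smoothed line,
    evaluated at the locations used in the text (a sketch)."""
    v = sigma_b**2 + sigma_s**2                   # combined variance
    c1 = k / math.sqrt(2.0 * math.pi * v)         # first-derivative prefactor
    c2 = k / (math.sqrt(2.0 * math.pi) * v**1.5)  # second-derivative prefactor

    # Eq. (11): first derivative at the edge location x = L - w/2
    f1_edge = c1 * (1.0 - math.exp(-w**2 / (2.0 * v)))
    # Eq. (12): second derivative one pixel outside the edge, x = L - w/2 - 1
    f2_edge = c2 * (math.exp(-1.0 / (2.0 * v))
                    - (1.0 + w) * math.exp(-(1.0 + w)**2 / (2.0 * v)))
    # Eq. (13): first derivative one pixel from the line centre, x = L - 1
    f1_line = c1 * (math.exp(-(1.0 - w / 2.0)**2 / (2.0 * v))
                    - math.exp(-(1.0 + w / 2.0)**2 / (2.0 * v)))
    # Eq. (14): second derivative at the line centre x = L (negative for a ridge)
    f2_line = -c2 * w * math.exp(-w**2 / (8.0 * v))
    return f1_edge, f2_edge, f1_line, f2_line

print(line_signal_strengths(w=3.0, k=1.0, sigma_b=1.0, sigma_s=1.0))
```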

3.3. Measure of Noise Strengths

To measure the SNR for line detection in a smoothed image, it is necessary to quantify the amount of noise after smoothing and the signal strengths. However, there have been few studies on the quantification of the amount of noise after smoothing. Thus, in the following section, the residual noise after the smoothing is first quantified, and the correlation between the pixels in an image of smoothed noises is derived using an error propagation scheme.

3.3.1. Correlation of Noises in Smoothed Images

First, the amount of noise at an arbitrary location (r,c) in a noise image is denoted by n as follows:
$$ n = n(r, c). \qquad (15) $$
Noise n is assumed to be symmetrically distributed around zero, and its expectation E { n } satisfies the following equation:
$$ E\{n\} = 0. \qquad (16) $$
Then, the dispersion or variance of the noise at a single pixel σ n 2 is defined as follows:
$$ \sigma_n^2 = D\{n\} = E\big\{(n - E\{n\})^2\big\} = E\{n^2\}. \qquad (17) $$
By extrapolating the expectation and dispersion of the noise to the noises of all the pixels in an image, the expectation and dispersion of the vector containing all noises can be written as follows:
$$ E\{\mathrm{vec}(n)\} = \mathbf{0}, \qquad D\{\mathrm{vec}(n)\} = \sigma_n^2\, I_c, \qquad (18) $$
where vec ( · ) is an operation converting a matrix into a column vector and I c is the continuous version of the identity matrix defined as I c ( a , b ) = 1 if a = b , else I c ( a , b ) = 0 .
The noise amount remaining after smoothing at an arbitrary location (r,c) can be modeled as follows:
$$ (n * s)(r, c) = \iint n(r-u,\, c-v)\, s(u, v)\, du\, dv. \qquad (19) $$
Furthermore, the expectation of the vectors of the smoothed noises is derived as follows:
$$ E\{\mathrm{vec}(n * s)\} = \mathrm{vec}(E\{n\} * s) = \mathbf{0}. \qquad (20) $$
Moreover, the dispersion of the vectors of the smoothed noises is derived as follows:
$$ D\{\mathrm{vec}(n*s)\} = E\Big\{\big[\mathrm{vec}(n*s) - E\{\mathrm{vec}(n*s)\}\big]\big[\mathrm{vec}(n*s) - E\{\mathrm{vec}(n*s)\}\big]^T\Big\} = E\big\{\mathrm{vec}(n*s)\,\mathrm{vec}(n*s)^T\big\}. \qquad (21) $$
To quantify the correlation between the noises remaining after smoothing, the covariance between them is first derived for two arbitrary pixels at $(r, c)$ and $(r-\alpha,\, c-\beta)$ as follows:
$$ \begin{aligned} \sigma_{ns|(\alpha,\beta)} &= \mathrm{cov}\big\{(n*s)\big|_{(r,c)},\ (n*s)\big|_{(r-\alpha,\,c-\beta)}\big\} = E\big\{(n*s)\big|_{(r,c)}\,(n*s)\big|_{(r-\alpha,\,c-\beta)}\big\} \\ &= \sigma_n^2 \iint \frac{1}{2\pi\sigma_s^2}\, e^{-\frac{\xi^2+\eta^2}{2\sigma_s^2}}\cdot \frac{1}{2\pi\sigma_s^2}\, e^{-\frac{(\xi-\alpha)^2+(\eta-\beta)^2}{2\sigma_s^2}}\, d\xi\, d\eta \\ &= \frac{\sigma_n^2}{4\pi^2\sigma_s^4}\iint e^{-\frac{\xi^2+\eta^2+(\xi-\alpha)^2+(\eta-\beta)^2}{2\sigma_s^2}}\, d\xi\, d\eta \\ &= \frac{\sigma_n^2}{4\pi^2\sigma_s^4}\cdot e^{-\frac{\alpha^2+\beta^2}{4\sigma_s^2}}\cdot \pi\sigma_s^2 \iint \frac{1}{2\pi\big(\tfrac{\sigma_s}{\sqrt{2}}\big)^2}\, e^{-\frac{(\xi-\frac{\alpha}{2})^2+(\eta-\frac{\beta}{2})^2}{2(\frac{\sigma_s}{\sqrt{2}})^2}}\, d\xi\, d\eta \\ &= \frac{\sigma_n^2}{4\pi\sigma_s^2}\, e^{-\frac{\alpha^2+\beta^2}{4\sigma_s^2}}. \end{aligned} \qquad (22) $$
If we let $d$ be the distance between the two positions $(r, c)$ and $(r-\alpha,\, c-\beta)$, then $d$ is calculated as follows:
$$ d = \big(\alpha^2 + \beta^2\big)^{\frac{1}{2}}. \qquad (23) $$
Now, according to Equation (22), the covariance between the two pixels can be calculated as follows:
$$ \sigma_{ns|d} = \frac{\sigma_n^2}{4\pi\sigma_s^2}\, e^{-\frac{d^2}{4\sigma_s^2}}. \qquad (24) $$
Then, the dispersion D { vec ( n s ) } , which is the auto-covariance of the remaining noise at (r,c), is derived from Equation (24) as follows:
$$ \sigma_{ns}^2 = \sigma_{ns|d=0} = \frac{\sigma_n^2}{4\pi\sigma_s^2}. \qquad (25) $$
Therefore, the correlation among noises remaining after the convolution with a smoothing function s for two arbitrary pixels separated by a distance d is derived from Equations (24) and (25) as follows:
$$ \rho_d = \frac{\sigma_{ns|d}}{\sigma_{ns}^2} = e^{-\frac{d^2}{4\sigma_s^2}}. \qquad (26) $$
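As a numerical sanity check, the correlation in Equation (26) can be reproduced by smoothing white noise and correlating pixel pairs at a fixed offset. The sketch below uses NumPy and SciPy; the image size, random seed, and variable names are arbitrary choices made only for illustration:

```python
import numpy as np
from scipy.ndimage import gaussian_filter

sigma_s, d = 2.0, 3          # smoothing factor and pixel offset along one axis
rng = np.random.default_rng(0)
noise = rng.normal(0.0, 1.0, size=(2048, 2048))    # zero-mean white noise
smoothed = gaussian_filter(noise, sigma=sigma_s, mode="wrap")

# Empirical correlation between pixels separated by d columns.
a = smoothed[:, :-d].ravel()
b = smoothed[:, d:].ravel()
rho_empirical = np.corrcoef(a, b)[0, 1]

rho_theory = np.exp(-d**2 / (4.0 * sigma_s**2))    # Equation (26)
print(rho_empirical, rho_theory)                   # the two values should be close
```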

3.3.2. Measure of Noise in Derivatives

To extract edges or lines, the calculation of the first and second derivatives is typically performed after smoothing. While calculating the first and second derivatives, the post-smoothing residual noise propagates, which should be quantified for computing the SNR of ED-based and SD-based line detections. Calculating the first and second derivatives in an image is implemented by applying a convolution of certain kernels to the image. In this study, the kernels with size 3 × 3 are used for the implementation. Accordingly, to investigate the propagation of noise, while calculating the first and second derivatives, it is necessary to consider the smoothed noises in the 3 × 3 neighborhood at each pixel as follows:
$$ (n*s)_{3\times 3} = \begin{pmatrix} (n*s)_1 & (n*s)_2 & (n*s)_3 \\ (n*s)_4 & (n*s)_5 & (n*s)_6 \\ (n*s)_7 & (n*s)_8 & (n*s)_9 \end{pmatrix}, \qquad (27) $$
where $i$ in $(n*s)_i$ is the unique sequential number assigned to each pixel within the 3 × 3 kernel. After rearranging the pixels into a vector ordered by their sequential numbers, their correlations are calculated with Equation (26) according to their distances and arranged into a correlation matrix $R_{(n*s)_{3\times 3}}$ as follows:
$$ R_{(n*s)_{3\times 3}} = \begin{pmatrix} 1 & \rho_1 & \rho_2 & \rho_1 & \rho_{\sqrt{2}} & \rho_{\sqrt{5}} & \rho_2 & \rho_{\sqrt{5}} & \rho_{2\sqrt{2}} \\ & 1 & \rho_1 & \rho_{\sqrt{2}} & \rho_1 & \rho_{\sqrt{2}} & \rho_{\sqrt{5}} & \rho_2 & \rho_{\sqrt{5}} \\ & & 1 & \rho_{\sqrt{5}} & \rho_{\sqrt{2}} & \rho_1 & \rho_{2\sqrt{2}} & \rho_{\sqrt{5}} & \rho_2 \\ & & & 1 & \rho_1 & \rho_2 & \rho_1 & \rho_{\sqrt{2}} & \rho_{\sqrt{5}} \\ & & & & 1 & \rho_1 & \rho_{\sqrt{2}} & \rho_1 & \rho_{\sqrt{2}} \\ & & & & & 1 & \rho_{\sqrt{5}} & \rho_{\sqrt{2}} & \rho_1 \\ & & & & & & 1 & \rho_1 & \rho_2 \\ \text{Sym.} & & & & & & & 1 & \rho_1 \\ & & & & & & & & 1 \end{pmatrix}. \qquad (28) $$
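Because the correlation matrix in Equation (28) depends only on the distances between the nine kernel positions, it can be assembled programmatically. A short sketch is given below (the row-major ordering matches the numbering of Equation (27); the function name is illustrative):

```python
import numpy as np

def correlation_matrix_3x3(sigma_s):
    """Correlation matrix of Equation (28): rho_d = exp(-d^2 / (4 sigma_s^2))
    for every pair of positions in a 3x3 window (row-major ordering)."""
    coords = np.array([(r, c) for r in range(3) for c in range(3)], dtype=float)
    diff = coords[:, None, :] - coords[None, :, :]
    d2 = (diff**2).sum(axis=2)                 # squared distances between positions
    return np.exp(-d2 / (4.0 * sigma_s**2))    # 9x9 matrix, ones on the diagonal

R = correlation_matrix_3x3(sigma_s=1.0)
```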
To calculate the first derivative in the column direction, this study used a scaled Sobel operator such that it should produce the gradient for one pixel unit as follows:
$$ D_c = \frac{1}{8}\begin{pmatrix} -1 & 0 & 1 \\ -2 & 0 & 2 \\ -1 & 0 & 1 \end{pmatrix}. \qquad (29) $$
The kernel in Equation (29) can be rewritten in its vector form as follows:
$$ V_{D_c} = \mathrm{vec}\big(D_c^{\,T}\big). \qquad (30) $$
Then, the dispersion σ n s D c 2 of the noise resulting from the convolution of the smoothed noise with the kernel in Equation (29) can be quantitatively derived as follows:
$$ \sigma_{nsD_c}^2 = \sigma_{ns}^2\, V_{D_c}^{T}\, R_{(n*s)_{3\times 3}}\, V_{D_c} = \frac{\sigma_n^2}{64\pi\sigma_s^2}\big( 3 + 4\rho_1 - 2\rho_2 - 4\rho_{\sqrt{5}} - \rho_{2\sqrt{2}} \big). \qquad (31) $$
Moreover, to calculate the second derivative in the column direction, this study used the following kernel such that it should produce the second derivative for one pixel unit, defined as follows:
$$ D_{cc} = \frac{1}{4}\begin{pmatrix} 1 & -2 & 1 \\ 2 & -4 & 2 \\ 1 & -2 & 1 \end{pmatrix}. \qquad (32) $$
The kernel in Equation (32) can be represented in the vector form as follows:
$$ V_{D_{cc}} = \mathrm{vec}\big(D_{cc}^{\,T}\big). \qquad (33) $$
Then, the dispersion σ n s D cc 2 of the noise resulting from the convolution of the smoothed noise with the kernel in Equation (32) can be quantitatively derived as follows:
$$ \sigma_{nsD_{cc}}^2 = \sigma_{ns}^2\, V_{D_{cc}}^{T}\, R_{(n*s)_{3\times 3}}\, V_{D_{cc}} = \frac{\sigma_n^2}{16\pi\sigma_s^2}\big( 9 - 16\rho_{\sqrt{2}} + 6\rho_2 + \rho_{2\sqrt{2}} \big). \qquad (34) $$
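Numerically, Equations (31) and (34) are the quadratic forms $\sigma_{ns}^2\, V^T R\, V$ evaluated for the two kernels. A compact sketch (function and variable names are illustrative) is shown below; it should reproduce the closed-form expressions above:

```python
import numpy as np

def kernel_noise_variance(kernel, sigma_n, sigma_s):
    """Variance of smoothed noise after convolution with a 3x3 kernel,
    i.e. sigma_ns^2 * V^T R V as in Equations (31) and (34) (a sketch)."""
    coords = np.array([(r, c) for r in range(3) for c in range(3)], dtype=float)
    d2 = ((coords[:, None, :] - coords[None, :, :])**2).sum(axis=2)
    R = np.exp(-d2 / (4.0 * sigma_s**2))              # Equation (28)
    v = kernel.reshape(-1)                            # row-major vectorisation,
                                                      # consistent with R above
    var_ns = sigma_n**2 / (4.0 * np.pi * sigma_s**2)  # Equation (25)
    return var_ns * v @ R @ v

D_c  = np.array([[-1, 0, 1], [-2, 0, 2], [-1, 0, 1]]) / 8.0   # Equation (29)
D_cc = np.array([[ 1,-2, 1], [ 2,-4, 2], [ 1,-2, 1]]) / 4.0   # Equation (32)
print(kernel_noise_variance(D_c, sigma_n=0.05, sigma_s=1.0),
      kernel_noise_variance(D_cc, sigma_n=0.05, sigma_s=1.0))
```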

3.4. Derivation of SNRs

In the following section, the SNRs of ED-based and SD-based line detections are derived using the signal strengths quantified in Section 3.2 and the amount of noise resulting from the smoothing and convolution of specific kernels described in Section 3.3. To make the calculation simple, the normal direction of lines to be detected is assumed to be aligned in the column direction in the following derivations.
To detect edge pixels at the boundaries of a line, its SNR for the first derivative in the normal direction is derived from Equations (11) and (31) as follows:
$$ \mathrm{SNR}_{\mathrm{edge}}^{D_c} = \frac{F'_{bs}\big|_{x=L-\frac{w}{2}}}{\sigma_{nsD_c}}. \qquad (35) $$
Similarly, the edge SNR for the second derivative in the normal direction can be derived from Equations (12) and (34). In this study, the second derivative of the edge profile model is assumed to be positive at $x = L - \frac{w}{2} - 1$; thus, the max(·) operation is applied to suppress negative signals, and the edge SNR for the second derivative in the normal direction is derived as follows:
$$ \mathrm{SNR}_{\mathrm{edge}}^{D_{cc}} = \frac{\max\!\big(F''_{bs}\big|_{x=L-\frac{w}{2}-1},\ 0\big)}{\sigma_{nsD_{cc}}}. \qquad (36) $$
Given that the edge detection is performed based on combining the first and second derivatives in this study, the SNRs for both the derivatives are combined into the measure SNR C ( edge ) as follows:
$$ \mathrm{SNR}_{\mathrm{C}}(\mathrm{edge}) = \sqrt{\mathrm{SNR}_{\mathrm{edge}}^{D_c}\cdot \mathrm{SNR}_{\mathrm{edge}}^{D_{cc}}}. \qquad (37) $$
The SNR of the SD-based line detection for the first derivative in the normal direction can be derived from Equations (13) and (31) as follows:
$$ \mathrm{SNR}_{\mathrm{line}}^{D_c} = \frac{F'_{bs}\big|_{x=L-1}}{\sigma_{nsD_c}}. \qquad (38) $$
The SNR of SD-based line detection for the second derivative in the normal direction can be derived from Equations (14) and (34) as follows:
$$ \mathrm{SNR}_{\mathrm{line}}^{D_{cc}} = \frac{\Big|\, F''_{bs}\big|_{x=L} \Big|}{\sigma_{nsD_{cc}}}. \qquad (39) $$
Then, because the SD-based line detection is considered to be performed based on combining the first and second derivatives in this study, the SNRs of both the derivatives are combined into the measure SNR C ( line SD ) as follows:
$$ \mathrm{SNR}_{\mathrm{C}}(\mathrm{line}_{\mathrm{SD}}) = \sqrt{\mathrm{SNR}_{\mathrm{line}}^{D_c}\cdot \mathrm{SNR}_{\mathrm{line}}^{D_{cc}}}, \qquad (40) $$
where the subscript SD in SNRC(line SD ) represents the SNR measure for the SD-based line detection so that it can be distinguished from that of the SGAD-based line detection.
With the increase in sizes of blurring and smoothing, the signals tend to mix with their neighborhood signals and degenerate. Based on this rationale, in this study, a penalty function is introduced to measure the degeneration of signals with the increase in the sizes of blurring and smoothing. However, as line width increases, the degeneration of signal owing to blurring and smoothing decreases. Accordingly, the penalty function is modeled as follows:
$$ p(\sigma_b, \sigma_s, w) = w^{\lambda}\,\big(\sigma_b^2 + \sigma_s^2\big)^{-1}, \qquad (41) $$
where λ is a power factor applied to the line width w to reflect that the degeneration of signal strength caused by blurring and smoothing decreases as the line width increases. From a set of brief experiments, the value of λ was set to 0.3 in this study. Thus, the SNR of edge detection with the penalty for blurring and smoothing, i.e., SNRPS(edge), is calculated as follows:
$$ \mathrm{SNR}_{\mathrm{PS}}(\mathrm{edge}) = p(\sigma_b, \sigma_s, w)\cdot \mathrm{SNR}_{\mathrm{C}}(\mathrm{edge}). \qquad (42) $$
Moreover, the SNR of the SD-based line detection with the penalty for blurring and smoothing, i.e., SNRPS(line SD ), is calculated as follows:
$$ \mathrm{SNR}_{\mathrm{PS}}(\mathrm{line}_{\mathrm{SD}}) = p(\sigma_b, \sigma_s, w)\cdot \mathrm{SNR}_{\mathrm{C}}(\mathrm{line}_{\mathrm{SD}}). \qquad (43) $$
Because the detection of edge pixels is required on both sides of a line in order to find the line pixel, the SNRPS of edge detection for the purpose of line detection is measured by halving SNRPS(edge) in Equation (42) as follows:
$$ \mathrm{SNR}_{\mathrm{PS}}(\mathrm{line}_{\mathrm{ED}}) = \frac{\mathrm{SNR}_{\mathrm{PS}}(\mathrm{edge})}{2}. \qquad (44) $$
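Putting the pieces together, the penalized SNR measures of Equations (42)–(44) can be evaluated for a given line width and smoothing factor. The sketch below is a compact, self-contained illustration that relies on the reconstructions above (in particular, the penalty function of Equation (41) and the combined SNRs of Equations (37) and (40) as reconstructed here); names and example values are illustrative:

```python
import math

def snr_ps_measures(w, k, sigma_b, sigma_s, sigma_n, lam=0.3):
    """Penalised SNR measures of Equations (42)-(44) for the ED-based and
    SD-based line detectors (a sketch; not the author's implementation)."""
    v = sigma_b**2 + sigma_s**2
    c1 = k / math.sqrt(2.0 * math.pi * v)
    c2 = k / (math.sqrt(2.0 * math.pi) * v**1.5)

    # Signal strengths, Equations (11)-(14); the magnitude of Eq. (14) is used.
    f1_edge = c1 * (1.0 - math.exp(-w**2 / (2.0 * v)))
    f2_edge = c2 * (math.exp(-1.0 / (2.0 * v))
                    - (1.0 + w) * math.exp(-(1.0 + w)**2 / (2.0 * v)))
    f1_line = c1 * (math.exp(-(1.0 - w / 2.0)**2 / (2.0 * v))
                    - math.exp(-(1.0 + w / 2.0)**2 / (2.0 * v)))
    f2_line = c2 * w * math.exp(-w**2 / (8.0 * v))

    # Noise strengths, Equations (31) and (34), with rho_d from Equation (26).
    rho = lambda d: math.exp(-d**2 / (4.0 * sigma_s**2))
    sd_dc = math.sqrt(sigma_n**2 / (64.0 * math.pi * sigma_s**2)
                      * (3.0 + 4.0 * rho(1.0) - 2.0 * rho(2.0)
                         - 4.0 * rho(math.sqrt(5.0)) - rho(2.0 * math.sqrt(2.0))))
    sd_dcc = math.sqrt(sigma_n**2 / (16.0 * math.pi * sigma_s**2)
                       * (9.0 - 16.0 * rho(math.sqrt(2.0)) + 6.0 * rho(2.0)
                          + rho(2.0 * math.sqrt(2.0))))

    # Combined SNRs (Equations (35)-(40)) and the penalty of Equation (41).
    snr_c_edge = math.sqrt((f1_edge / sd_dc) * (max(f2_edge, 0.0) / sd_dcc))
    snr_c_line_sd = math.sqrt((f1_line / sd_dc) * (f2_line / sd_dcc))
    penalty = w**lam / (sigma_b**2 + sigma_s**2)

    snr_ps_edge = penalty * snr_c_edge              # Equation (42)
    snr_ps_line_sd = penalty * snr_c_line_sd        # Equation (43)
    snr_ps_line_ed = snr_ps_edge / 2.0              # Equation (44)
    return snr_ps_line_ed, snr_ps_line_sd

print(snr_ps_measures(w=3.0, k=1.0, sigma_b=1.0, sigma_s=1.0, sigma_n=0.05))
```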
To compare the performance of the ED-based and SD-based line detections, a set of graphical investigations was used. The tests were performed with various values of the smoothing factor σ s and the line width w for investigation. The smoothing factor was set to vary from 0.4 to 10.0 pixels with an interval of 0.1 pixels, and the line width was set to vary from 0.5 to 20.0 pixels with an interval of 0.1 pixels. The blurring factor σ b was set to 1.0, which is a reasonable value to represent the amount of blurs observed in many camera-captured images [39]. The standard deviation of noise and contrast were set to σ n = 0.05 and k = 1.0 , respectively, in the following graphical investigations.
Figure 3a shows the SNRC of edge detection. In Figure 3a, the SNR becomes very high when the smoothing factor is large, particularly when the line width is large. However, this is not realistic because the signals become mixed with neighboring signals as the smoothing factor increases. Thus, according to Equation (42), a realistic measure of SNR is obtained by applying the penalty function to SNRC(edge). The resulting SNRPS is shown in Figure 3b.
Figure 4a shows the SNRC(line SD ). Similarly, in Figure 4a, the SNR becomes extremely high when the smoothing factor is set to be large, and a realistic measure of SNR is obtained by applying the penalty function to SNRC(line SD ) according to Equation (43). The resulting SNRPS(line SD ) is shown in Figure 4b.
The SNRPS(line ED ), calculated using Equation (44), is shown in Figure 5a. Figure 5b shows the difference SNRPS(line SD )−SNRPS(line ED ). According to Figure 5b, when the line width is relatively small, the SD-based line detection is more effective than the ED-based line detection in terms of SNR. For example, the SD-based line detection is more effective than the ED-based line detection when the line width w is less than 5 pixels with a smoothing factor of 1.0, and less than 11 pixels when a smoothing factor of 3.0 is applied to an image. Moreover, the highest values of SNRPS(line SD )−SNRPS(line ED ) against the varying line widths are observed when a smoothing factor in the range from 1.0 to 2.0 is applied to an image with a line width of around 4 pixels.

4. Method—SNR of SGAD for Line Detection

In this section, a line detection method, based on the sum of gradient angle differences [6], is introduced. Then, its performance is investigated based on its SNR derivations. Subsequently, the SGAD-based line detection can be compared with the SD-based line detection in terms of their SNRs.

4.1. Definition of SGAD

According to the work in [6], the gradient angle at each pixel can be derived as follows:
$$ \theta_i = \tan^{-1}\!\left(\frac{g_{r_i}}{g_{c_i}}\right), \qquad (45) $$
where $g_{r_i}$ and $g_{c_i}$ are the gradients at pixel $i$ in the row and column directions, respectively.
Then, the gradient angle difference (GAD) of two pixels is defined as the minimum positive angle between their gradient vectors as follows:
$$ \delta_{i,j} = \min\!\big( \mathrm{abs}(\theta_i - \theta_j),\ 2\pi - \mathrm{abs}(\theta_i - \theta_j) \big). \qquad (46) $$
The angle difference is calculated for the pairs of gradient vectors (Figure 6). Then, the measure, SGAD is derived by adding the gradient angle differences as follows:
$$ \mathrm{SGAD} = \delta_{1,8} + \delta_{2,7} + \delta_{3,6} + \delta_{4,5}. \qquad (47) $$
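For illustration, the SGAD measure of Equations (45)–(47) can be computed at a pixel from an image of gradient angles. The sketch below pairs opposite neighbors as in Figure 6; the offsets, names, and the use of arctan2 (which keeps the full quadrant) are assumptions made for this sketch:

```python
import numpy as np

def sgad(theta, r, c):
    """Sum of gradient angle differences (Equations (45)-(47)) at pixel (r, c).
    `theta` is an image of gradient angles; (r, c) must not lie on the border."""
    def gad(t_i, t_j):                       # Equation (46)
        d = abs(t_i - t_j)
        return min(d, 2.0 * np.pi - d)
    pairs = [((-1, -1), (1, 1)),             # pixels 1 and 8 (diagonal)
             ((-1,  0), (1, 0)),             # pixels 2 and 7 (row direction)
             ((-1,  1), (1, -1)),            # pixels 3 and 6 (anti-diagonal)
             (( 0, -1), (0, 1))]             # pixels 4 and 5 (column direction)
    return sum(gad(theta[r + dr1, c + dc1], theta[r + dr2, c + dc2])
               for (dr1, dc1), (dr2, dc2) in pairs)

# Gradient angles from row/column gradient images, Equation (45):
# theta = np.arctan2(g_r, g_c)
```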

4.2. Dispersion of Gradient Angle Differences

The dispersion of the single angle difference between the $i$th and $j$th pixels, $D\{\theta_i - \theta_j\}$, is calculated as follows:
$$ D\{\theta_i - \theta_j\} = \begin{pmatrix} 1 & -1 \end{pmatrix} D\!\left\{\begin{pmatrix} \theta_i \\ \theta_j \end{pmatrix}\right\} \begin{pmatrix} 1 \\ -1 \end{pmatrix}. \qquad (48) $$
The dispersion of the vector of the angles in Equation (48) is derived as follows:
$$ D\!\left\{\begin{pmatrix} \theta_i \\ \theta_j \end{pmatrix}\right\} = J_{\theta(i,j)}\, D\!\left\{\begin{pmatrix} g_{c_i} \\ g_{r_i} \\ g_{c_j} \\ g_{r_j} \end{pmatrix}\right\} J_{\theta(i,j)}^{T}, \qquad (49) $$
where
$$ J_{\theta(i,j)} = \begin{pmatrix} \dfrac{\partial\theta_i}{\partial g_{c_i}} & \dfrac{\partial\theta_i}{\partial g_{r_i}} & 0 & 0 \\ 0 & 0 & \dfrac{\partial\theta_j}{\partial g_{c_j}} & \dfrac{\partial\theta_j}{\partial g_{r_j}} \end{pmatrix}, \qquad \frac{\partial\theta_i}{\partial g_{c_i}} = -\frac{g_{r_i}}{g_{c_i}^2 + g_{r_i}^2}, \qquad \frac{\partial\theta_i}{\partial g_{r_i}} = \frac{g_{c_i}}{g_{c_i}^2 + g_{r_i}^2}. $$
Then, the dispersion of the vector of the gradients in Equation (49) is derived as follows:
$$ D\!\left\{\begin{pmatrix} g_{c_i} \\ g_{r_i} \\ g_{c_j} \\ g_{r_j} \end{pmatrix}\right\} = J_{g(i,j)}\, \sigma_{ns}^2 R_{i,j}\, J_{g(i,j)}^{T}. \qquad (50) $$
The value of the gradient in the column direction at the ith pixel is calculated by convoluting the smoothed image with the kernel D c as follows:
$$ g_{c_i} = (I_s * D_c)_i = (F_{bs} * D_c)_i + (n * s * D_c)_i. \qquad (51) $$
To calculate the gradients in the row direction, a kernel is defined as follows:
$$ D_r = \frac{1}{8}\begin{pmatrix} -1 & -2 & -1 \\ 0 & 0 & 0 \\ 1 & 2 & 1 \end{pmatrix}. \qquad (52) $$
Then, the value of the gradient in the row direction at the ith pixel is calculated using the convolution of the smoothed image with the kernel D r as follows:
$$ g_{r_i} = (I_s * D_r)_i = (F_{bs} * D_r)_i + (n * s * D_r)_i. \qquad (53) $$
Figure 7 shows the windows for calculating the gradients, where the numbers within the pixels represent the pixel indices used to indicate the pixel locations in the Jacobian matrices below. In Figure 7a, the center pixel is located at 8 and the gradient angle difference is calculated for the pixels located at 5 and 11. In Figure 7b, the center pixel is located at 9 and the gradient angle difference is calculated for the pixels located at 5 and 13.
Then, the Jacobian matrix J g ( i , j ) in Equation (50) when i = 4 and j = 5 is derived using D c and D r . Moreover, the Jacobian matrix, J g ( i , j ) in Equation (50) when i = 1 and j = 8 is derived using D c and D r . The correlation matrix R i , j in Equation (50) when i = 4 and j = 5 is derived as per Figure 7a and Equation (26). The correlation matrix R i , j in Equation (50) when i = 1 and j = 8 is derived as per Figure 7b and Equation (26).
Thus, the dispersion of the vector of the gradients in Equation (50) when i = 4 and j = 5 is derived as follows:
$$ D\!\left\{\begin{pmatrix} g_{c_4} \\ g_{r_4} \\ g_{c_5} \\ g_{r_5} \end{pmatrix}\right\} = \frac{\sigma_{ns}^2}{64}\begin{pmatrix} P_1 & 0 & P_2 & 0 \\ 0 & P_1 & 0 & P_3 \\ P_2 & 0 & P_1 & 0 \\ 0 & P_3 & 0 & P_1 \end{pmatrix}, \qquad (54) $$
where
$$ \begin{aligned} P_1 &= 12 + 16\rho_1 - 8\rho_2 - 16\rho_{\sqrt{5}} - 4\rho_{2\sqrt{2}}, \\ P_2 &= -6 - 8\rho_1 + 10\rho_2 + 16\rho_{\sqrt{5}} + 4\rho_{2\sqrt{2}} - 6\rho_4 - 8\rho_{\sqrt{17}} - 2\rho_{2\sqrt{5}}, \\ P_3 &= 2 + 8\rho_1 + 10\rho_2 - 8\rho_{\sqrt{5}} - 12\rho_{2\sqrt{2}} + 8\rho_3 - 8\rho_{\sqrt{13}} + 2\rho_4 - 2\rho_{2\sqrt{5}}. \end{aligned} $$
Furthermore, the dispersion of the vector of the gradients in Equation (50) when i = 1 and j = 8 is derived as follows:
$$ D\!\left\{\begin{pmatrix} g_{c_1} \\ g_{r_1} \\ g_{c_8} \\ g_{r_8} \end{pmatrix}\right\} = \frac{\sigma_{ns}^2}{64}\begin{pmatrix} Q_1 & 0 & Q_2 & Q_3 \\ 0 & Q_1 & Q_3 & Q_2 \\ Q_2 & Q_3 & Q_1 & 0 \\ Q_3 & Q_2 & 0 & Q_1 \end{pmatrix}, \qquad (55) $$
where
$$ \begin{aligned} Q_1 &= 12 + 16\rho_1 - 8\rho_2 - 16\rho_{\sqrt{5}} - 4\rho_{2\sqrt{2}}, \\ Q_2 &= -1 - 4\rho_1 - 4\rho_2 + 8\rho_{\sqrt{5}} + 12\rho_{2\sqrt{2}} - 4\rho_3 + 8\rho_{\sqrt{13}} - 2\rho_4 - 4\rho_{\sqrt{17}} - 4\rho_{2\sqrt{5}} - 4\rho_5 - \rho_{4\sqrt{2}}, \\ Q_3 &= -1 - 4\rho_1 - 4\rho_{\sqrt{2}} + 4\rho_3 + 8\rho_{\sqrt{10}} + 2\rho_4 + 4\rho_{\sqrt{17}} - 4\rho_{3\sqrt{2}} - 4\rho_5 - \rho_{4\sqrt{2}}. \end{aligned} $$
Thus, the dispersion of the angle difference in Equation (48), $D\{\theta_i - \theta_j\}$, when $i = 4$ and $j = 5$ is derived as follows:
$$ \sigma_{\theta_4-\theta_5}^2 = D\{\theta_4 - \theta_5\} = \frac{\sigma_n^2}{256\pi\sigma_s^2}\left[ \frac{P_1}{g_{c_4}^2 + g_{r_4}^2} + \frac{P_1}{g_{c_5}^2 + g_{r_5}^2} - \frac{2\,\big(g_{r_4} g_{r_5} P_2 + g_{c_4} g_{c_5} P_3\big)}{\big(g_{c_4}^2 + g_{r_4}^2\big)\big(g_{c_5}^2 + g_{r_5}^2\big)} \right]. \qquad (56) $$
Moreover, the dispersion of the angle difference in Equation (48), $D\{\theta_i - \theta_j\}$, when $i = 1$ and $j = 8$ is derived as follows:
$$ \sigma_{\theta_1-\theta_8}^2 = D\{\theta_1 - \theta_8\} = \frac{\sigma_n^2}{256\pi\sigma_s^2}\left[ \frac{Q_1}{g_{c_1}^2 + g_{r_1}^2} + \frac{Q_1}{g_{c_8}^2 + g_{r_8}^2} - \frac{2 Q_2\,\big(g_{r_1} g_{r_8} + g_{c_1} g_{c_8}\big)}{\big(g_{c_1}^2 + g_{r_1}^2\big)\big(g_{c_8}^2 + g_{r_8}^2\big)} + \frac{2 Q_3\,\big(g_{r_1} g_{c_8} + g_{c_1} g_{r_8}\big)}{\big(g_{c_1}^2 + g_{r_1}^2\big)\big(g_{c_8}^2 + g_{r_8}^2\big)} \right]. \qquad (57) $$

4.3. Derivation of SNRs

To effectively compare the performance of SGAD-based line detection with that of the SD-based line detection, a line with width w and aligned along the direction of row was used. Then, the line was assumed to have the following gradient values at pixels i and j, which were located one pixel to the left and right, respectively, of the center of the line and can be expressed as follows:
$$ g_{c_i} = g, \quad g_{r_i} = 0, \quad g_{c_j} = -g, \quad \text{and} \quad g_{r_j} = 0, \qquad (58) $$
where
$$ g = F'_{bs}\Big|_{x=L-1}. $$
Then, at the center of the line, the standard deviation of the gradient angle difference in the column direction, σ θ 4 θ 5 , was derived from Equations (56) and (58) as follows:
$$ \sigma_{\theta_4-\theta_5} = \frac{\sigma_n\sqrt{P_1 + P_3}}{8\sqrt{2\pi}\,\sigma_s\, g}. \qquad (59) $$
Moreover, at the center of the line, the standard deviation of the gradient angle difference in the lower-right diagonal direction, σ θ 1 θ 8 , was derived from Equations (57) and (58) as follows:
$$ \sigma_{\theta_1-\theta_8} = \frac{\sigma_n\sqrt{Q_1 + Q_2}}{8\sqrt{2\pi}\,\sigma_s\, g}. \qquad (60) $$
For the line model assumed in the first paragraph of this section, the strength of the line signal based on the angle difference was π . Thus, the SNR of the angle difference in the column direction can be derived as follows:
$$ \mathrm{SNR}_{\theta_4-\theta_5} = \frac{\pi}{\sigma_{\theta_4-\theta_5}} = \frac{8\pi\sqrt{2\pi}\,\sigma_s}{\sigma_n\sqrt{P_1 + P_3}}\, g. \qquad (61) $$
Moreover, the SNR of the angle difference in the lower-right diagonal direction can be derived as follows:
$$ \mathrm{SNR}_{\theta_1-\theta_8} = \frac{\pi}{\sigma_{\theta_1-\theta_8}} = \frac{8\pi\sqrt{2\pi}\,\sigma_s}{\sigma_n\sqrt{Q_1 + Q_2}}\, g. \qquad (62) $$
For the SGAD-based line detection, the performance was measured by combining both the gradient angle difference and SNR of the first derivative. Thus, the SNR combined for the line detection using the gradients at the fourth and fifth pixels in Figure 6 can be modeled as follows:
$$ \mathrm{SNR}_{\mathrm{C}}(\mathrm{line}_{\theta_4-\theta_5}) = \sqrt{\mathrm{SNR}_{\mathrm{line}}^{D_c}\cdot \mathrm{SNR}_{\theta_4-\theta_5}}. \qquad (63) $$
Moreover, the SNR combined for the line detection using the gradients at the first and eighth pixels in Figure 6 can be modeled as follows:
$$ \mathrm{SNR}_{\mathrm{C}}(\mathrm{line}_{\theta_1-\theta_8}) = \sqrt{\mathrm{SNR}_{\mathrm{line}}^{D_c}\cdot \mathrm{SNR}_{\theta_1-\theta_8}}. \qquad (64) $$
Considering the penalty for blurring, smoothing, and line width in Equation (41), the SNR C ( line θ 4 θ 5 ) in Equation (63) can be derived into SNR PS ( line θ 4 θ 5 ) as follows:
$$ \mathrm{SNR}_{\mathrm{PS}}(\mathrm{line}_{\theta_4-\theta_5}) = p(\sigma_b, \sigma_s, w)\cdot \mathrm{SNR}_{\mathrm{C}}(\mathrm{line}_{\theta_4-\theta_5}). \qquad (65) $$
Similarly, the SNR C ( line θ 1 θ 8 ) in Equation (64) can be derived into SNR PS ( line θ 1 θ 8 ) as follows:
$$ \mathrm{SNR}_{\mathrm{PS}}(\mathrm{line}_{\theta_1-\theta_8}) = p(\sigma_b, \sigma_s, w)\cdot \mathrm{SNR}_{\mathrm{C}}(\mathrm{line}_{\theta_1-\theta_8}). \qquad (66) $$
Then, the SNRPS of the sum of the gradient angle differences for all of the four pairs in Figure 6 can be modeled as follows:
$$ \mathrm{SNR}_{\mathrm{PS}}(\mathrm{line}_{\mathrm{SGAD}}) = \mathrm{SNR}_{\mathrm{PS}}(\mathrm{line}_{\theta_1-\theta_8}) + \mathrm{SNR}_{\mathrm{PS}}(\mathrm{line}_{\theta_2-\theta_7}) + \mathrm{SNR}_{\mathrm{PS}}(\mathrm{line}_{\theta_3-\theta_6}) + \mathrm{SNR}_{\mathrm{PS}}(\mathrm{line}_{\theta_4-\theta_5}). \qquad (67) $$
Because of the symmetry between the pair of gradients at 1 and 8, and the pair of gradients at 3 and 6 in Figure 6, SNR PS ( line θ 3 θ 6 ) can be written as follows:
$$ \mathrm{SNR}_{\mathrm{PS}}(\mathrm{line}_{\theta_3-\theta_6}) = \mathrm{SNR}_{\mathrm{PS}}(\mathrm{line}_{\theta_1-\theta_8}). \qquad (68) $$
Because the gradients at 2 and 7 in Figure 6 are zero for the assumed line model, SNR PS ( line θ 2 θ 7 ) can be written as follows:
$$ \mathrm{SNR}_{\mathrm{PS}}(\mathrm{line}_{\theta_2-\theta_7}) = 0. \qquad (69) $$
Furthermore, considering the SNRPS of the four directions, the SNRPS of SGAD for detecting the assumed line model can be derived as follows:
$$ \mathrm{SNR}_{\mathrm{PS}}(\mathrm{line}_{\mathrm{SGAD}}) = \mathrm{SNR}_{\mathrm{PS}}(\mathrm{line}_{\theta_4-\theta_5}) + 2\cdot \mathrm{SNR}_{\mathrm{PS}}(\mathrm{line}_{\theta_1-\theta_8}). \qquad (70) $$
The ratio of SNR PS ( line SGAD ) to SNR PS ( line SD ) can be derived from Equations (43) and (70) as follows:
$$ \mathrm{ratio}(\mathrm{SGAD}, \mathrm{SD}) = \frac{\mathrm{SNR}_{\mathrm{PS}}(\mathrm{line}_{\theta_4-\theta_5}) + 2\cdot \mathrm{SNR}_{\mathrm{PS}}(\mathrm{line}_{\theta_1-\theta_8})}{\mathrm{SNR}_{\mathrm{PS}}(\mathrm{line}_{\mathrm{SD}})}. \qquad (71) $$
Furthermore, Equation (71) can be reduced from Equations (43), (65) and (66) as follows:
$$ \mathrm{ratio}(\mathrm{SGAD}, \mathrm{SD}) = \frac{\sqrt{\mathrm{SNR}_{\theta_4-\theta_5}} + 2\sqrt{\mathrm{SNR}_{\theta_1-\theta_8}}}{\sqrt{\mathrm{SNR}_{\mathrm{line}}^{D_{cc}}}}. \qquad (72) $$
Let
$$ f = \Big|\, F''_{bs}\big|_{x=L} \Big| \qquad (73) $$
and
$$ R = 9 - 16\rho_{\sqrt{2}} + 6\rho_2 + \rho_{2\sqrt{2}}. \qquad (74) $$
Then, ratio ( SGAD , SD ) can be derived from Equations (14), (39), (61), (62), (72), (73) and (74) as follows:
$$ \mathrm{ratio}(\mathrm{SGAD}, \mathrm{SD}) = \sqrt{2\sqrt{2}\,\pi}\,\left(\frac{g}{f}\right)^{\!\frac{1}{2}} R^{\frac{1}{4}}\left[ (P_1 + P_3)^{-\frac{1}{4}} + 2\,(Q_1 + Q_2)^{-\frac{1}{4}} \right]. \qquad (75) $$
In Equation (75), $g/f$ is derived from Equations (13) and (14) as follows:
$$ \frac{g}{f} = \frac{\sigma_b^2 + \sigma_s^2}{w}\left[ e^{\frac{w - 1}{2(\sigma_b^2 + \sigma_s^2)}} - e^{-\frac{w + 1}{2(\sigma_b^2 + \sigma_s^2)}} \right]. \qquad (76) $$
As shown in Equation (76), ratio(SGAD,SD) changes with the variations of σ b , σ s , and w, but does not change with the variation of σ n .
To investigate the performance of the SGAD-based line detection, the graphical plots of SNR are used as shown below. The values of SNR used are generated under the same conditions as applied in Section 3.4.
Figure 8 shows the difference, SNRPS(lineSGAD)− SNRPS(lineSD). According to Figure 8, the SGAD-based line detection has higher SNR than the SD-based line detection under varying conditions of the line width and the smoothing factor. When the smoothing factor of 1.0 is applied, the advantage of the SGAD-based line detection becomes distinct for a line width less than 8 pixels.

5. Experimental Results and Discussion

To validate the derived SNRs on natural images, the relationships of the SNRs with completeness and correctness were investigated in this section. For the experiments with natural images, the 512 × 512 Lena image in gray scale [0, 255] was first selected to describe the experimental procedure. Figure 9 shows the Lena image with three annotated regions for further investigation.
Figure 10 shows the images resulting from each process for the annotated regions in Figure 9. To measure the SNRs, completeness, and correctness, ridge and valley pixels were detected in the smoothed version of the original image with $\sigma_s = 1.0$ by the SGAD-based line detection and considered as the ground truth ridge and valley pixels. Then, the line width and contrast of each ground truth pixel were calculated as follows. At each ground truth pixel, the line width was searched in two directions: the normal direction and its opposite. For ridge pixels, in each direction, the line width was extended by one pixel at a time until the intensity of the current pixel became greater than that of the previous pixel plus a specified tolerance. In this study, the tolerance was set to 0.05. Then, of the two search directions, the one whose last intensity has the smaller contrast with the ground truth line pixel was selected, and the line width in that direction was recorded as the line width of the ground truth line pixel. The contrast in the selected direction was recorded as the contrast of the ground truth line pixel. The method to find the line width and contrast of valley pixels was analogous to that for ridge pixels. The SNRs of the SD-based and SGAD-based line detections were calculated using the obtained line width, contrast, and noise strength. As shown in Figure 10, the SNRs of the SGAD-based line detection were much greater than those of the SD-based line detection. Accordingly, as shown in the last two columns in Figure 10, the results by SGAD are less noisy and more accurate than those by SD when compared with the ground truths shown in the third column of Figure 10. A sketch of the width and contrast search is given below.
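The following is a simplified illustration of the width and contrast search for one search direction of a ridge pixel; the step vector, bounds handling, and names are assumptions rather than the author's implementation:

```python
import numpy as np

def ridge_width_one_direction(img, r, c, step, tol=0.05, max_width=30):
    """Walk from a ground-truth ridge pixel along one direction (step = (dr, dc))
    until the intensity rises again by more than `tol`; return the width
    travelled and the intensity contrast against the ridge pixel."""
    prev = img[r, c]
    width = 0
    for i in range(1, max_width + 1):
        rr, cc = r + i * step[0], c + i * step[1]
        if not (0 <= rr < img.shape[0] and 0 <= cc < img.shape[1]):
            break
        cur = img[rr, cc]
        if cur > prev + tol:        # intensity turns upward: stop extending
            break
        prev = cur
        width = i
    contrast = img[r, c] - prev     # contrast of the ridge against the last sample
    return width, contrast

# The search is run in the normal direction and its opposite; the direction whose
# final intensity has the smaller contrast with the ridge pixel is kept.
```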
The completeness and correctness of the line detection results were measured as follows. First, ground truth images were generated by applying the SGAD-based line detection to the smoothed version of the original image with $\sigma_s = 1.0$, as shown in the third column of Figure 10. Then, a set of noisy images was generated by adding Gaussian random noise of varying strength, ranging from $\sigma_n = 4.0$ to 20.0 with an interval of 4.0, to the original images. Next, all the noisy images were denoised by applying a smoothing convolution with $\sigma_s = 1.0$, and the SD-based and SGAD-based line detections were applied to them. When the ground truth image used as reference is denoted by R and the line pixel image resulting from the application of a line detection method to a noisy image is denoted by T, the image C containing the line pixels correctly detected by a line detector can be written as
$$ C = R \wedge T, \qquad (77) $$
where ∧ denotes the logical AND operator.
To find the incorrectly detected line pixels within a specified distance from the reference pixels, the reference line pixel image was dilated with a certain radius r as
$$ R_d = R \oplus S_r, \qquad (78) $$
where ⊕ denotes the dilation operator and $S_r$ the structuring element with radius r. In this study, r was set to 3 pixels because it equals $3\sigma_s$ when $\sigma_s = 1.0$.
Then, the image containing incorrectly detected pixels Q was generated as
$$ Q = R_d \wedge (T \wedge \neg C), \qquad (79) $$
where ¬ denotes the logical NOT operator.
Next, the number of incorrectly detected pixels was counted at each ground truth line pixel within a certain distance $d_t$ along its line normal direction and the opposite direction using the images R and Q, and recorded into an image V. In this study, $d_t$ was set to 3 pixels, the same as the value of r used for the dilation of the reference image.
Subsequently, the SNRs at ground truth line pixels were summarized into a histogram with a certain bin size. The bin size was determined by dividing the maximum of SNRs in each SNR image by a specified number n. In this study, the number n was set to 100. Then, new bins were generated in a descending order so that each new bin has about a certain percentage p b of the total ground truth line pixels. The value of p b was set to 10 percent in this study. Figure 11 shows the results for the SNRs of the ridge pixels in the SD-based line detection, when σ n = 8.0 .
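A rough sketch of this binning procedure is given below; the exact bin-merging rule is an assumption based on the description above, and the names are illustrative:

```python
import numpy as np

def adaptive_snr_bins(snr_values, n=100, p_b=0.10):
    """Group ground-truth SNR values into bins that each hold roughly a fraction
    p_b of the pixels, starting from a fine histogram with n equal-width bins."""
    snr_values = np.asarray(snr_values, dtype=float)
    counts, edges = np.histogram(snr_values, bins=n,
                                 range=(0.0, snr_values.max()))
    target = p_b * len(snr_values)
    boundaries = [edges[-1]]
    acc = 0
    for i in range(n - 1, -1, -1):          # merge fine bins in descending order
        acc += counts[i]
        if acc >= target:
            boundaries.append(edges[i])
            acc = 0
    if boundaries[-1] != edges[0]:
        boundaries.append(edges[0])
    return np.array(boundaries[::-1])       # ascending bin boundaries
```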
Next, pixels having SNRs within the boundaries of each SNR bin were selected and the mean of their SNRs was recorded. Moreover, among the selected pixels, the total number of correctly detected pixels N c was counted using the image C derived from Equation (77). Then, the completeness corresponding to the mean SNR was measured by dividing the number of correctly detected line pixels by the total number of the ground truth line pixels N g t as shown here:
$$ \mathrm{Completeness} = \frac{N_c}{N_{gt}} \times 100. \qquad (80) $$
Additionally, among the selected pixels, the total number of incorrectly detected pixels N i c was counted using the image Q derived from Equation (79). Then, the correctness corresponding to the mean SNR was measured as
$$ \mathrm{Correctness} = \frac{N_c}{N_{gt} + N_{ic}} \times 100. \qquad (81) $$
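The computation of Equations (77)–(81) can be sketched with boolean image arrays and morphological dilation as follows. Global counts are used here for brevity, whereas the paper accumulates the counts per SNR bin; the disk-shaped structuring element and the names are assumptions for this sketch:

```python
import numpy as np
from scipy.ndimage import binary_dilation

def completeness_correctness(R, T, radius=3):
    """Completeness and correctness of a detected line image T (boolean array)
    against a reference image R (boolean array), following Equations (77)-(81)."""
    # Disk-shaped structuring element S_r for the dilation in Equation (78).
    y, x = np.ogrid[-radius:radius + 1, -radius:radius + 1]
    S_r = x**2 + y**2 <= radius**2

    C = R & T                                   # correctly detected pixels, Eq. (77)
    R_d = binary_dilation(R, structure=S_r)     # dilated reference, Eq. (78)
    Q = R_d & (T & ~C)                          # incorrectly detected pixels, Eq. (79)

    n_gt, n_c, n_ic = R.sum(), C.sum(), Q.sum()
    completeness = 100.0 * n_c / n_gt           # Equation (80)
    correctness = 100.0 * n_c / (n_gt + n_ic)   # Equation (81)
    return completeness, correctness
```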
Figure 12 shows the observed relationships of the SNRs with completeness and correctness for the Lena image under varying noise strengths. As shown in the figure, the trend of these relationships is evident under all the tested noise strengths. Moreover, the SGAD-based line detection showed higher SNRs than the SD-based line detection, as well as higher completeness and correctness, under all the tested noise strengths.
Figure 13 shows all the plots of Figure 12 together. As shown in Figure 13, the validity of the SNRs derived for the SD-based and SGAD-based line detections is demonstrated by the overall strong and consistent relationships of the SNRs with completeness and correctness under varying noise strengths.
The experiments were also applied to eight other natural images, as shown in Figure 14. Before line detection, the natural images were converted into grayscale images using MATLAB’s rgb2gray function. As shown in Figure 15, the experiments with these images show that there exist distinct and strong relationships of the SNRs with completeness and correctness. The pattern of variation of completeness against SNR was observed to be similar to that of correctness. Although the content differs among the test images, the line detection completeness and correctness against the SNRs showed similar patterns throughout the test images. This observation indicates the consistency of the SNR measures with respect to completeness and correctness under various line conditions. However, as shown in Figure 15, the SD-based line detection was observed to produce low completeness and correctness relative to its SNR when SNR > 3.0. This is caused by frequent bifurcations of the SD-based line detection. When the line width was not relatively large but the contrast was relatively large, the calculated SNR was relatively large. Meanwhile, in the experiments, the shape of many line profiles under such conditions deviated from that of the ideal line profile, and the SD-based line detection could not overcome this problem, producing low completeness and correctness. However, the SGAD-based line detection was observed to overcome this problem to some extent and to produce high completeness and correctness when the value of the SNR was high.

6. Conclusions

In this study, the performances of line detectors were evaluated by analytical quantification of their SNRs and by their completeness and correctness. The correlations arising among pixels when a Gaussian smoothing filter is applied were first identified. Then, the amount of noise remaining in the derived values, such as the gradients, SD, and SGAD, was derived based on error propagation. Furthermore, the SNRs of the line detectors were analytically derived based on the derived signal and noise strengths. In addition, a penalty function was proposed to consider the influence of blur, smoothing, and line width on line detectors. Verification of the validity of the derived SNRs based on the investigation of their relationships with completeness and correctness indicates that the derivations of the SNRs of line detectors proposed in this study are effective in quantifying their performance. The validation test of the SNRs was performed using nine color images and will be extended to larger image sets in future work.
Regarding feature extraction, it was observed that the edge features could be accurately extracted in the vector form using the methods proposed in [18,41]. In contrast, the line features were considered to be accurately extracted in the vector form using the SGAD method followed by the subpixel line localization and location-linking methods as described in [5].
Therefore, a set of methods for line detection, non-maxima suppression, subpixel line localization, and linking can produce high-quality ridge and valley features for various applications. Moreover, the error propagation scheme used in this study to derive the relevant theoretical SNRs can be used to develop high-performance operators for extracting features.

Funding

This research was supported by the Basic Science Research Program through the National Research Foundation of Korea (NRF) funded by the Ministry of Education (2016R1D1A1B02011625).

Institutional Review Board Statement

Not applicable.

Informed Consent Statement

Not applicable.

Data Availability Statement

Not applicable.

Conflicts of Interest

The author declares no conflict of interest.

References

  1. Haralick, R.M. Ridges and valleys on digital images. Comput. Vis. Graph. Image Process. 1983, 22, 28–38. [Google Scholar] [CrossRef]
  2. Gauch, J.M.; Pizer, S.M. Multiresolution analysis of ridges and valleys in grey-scale images. IEEE Trans. Pattern Anal. Mach. Intell. 1993, 15, 635–646. [Google Scholar] [CrossRef]
  3. Serrat, J.; Lopez, A.; Lloret, D. On ridges and valleys. In Proceedings of the 15th International Conference on Pattern Recognition, Barcelona, Spain, 3–7 September 2000. [Google Scholar]
  4. Rasche, C. Rapid contour detection for image classification. IET Image Process. 2018, 12, 532–538. [Google Scholar] [CrossRef]
  5. Seo, S. Subpixel line localization with normalized sums of gradients and location linking with straightness and omni-directionality. IEEE Access 2019, 7, 180155–180167. [Google Scholar] [CrossRef]
  6. Seo, S. Line-detection based on the sum of gradient angle differences. Appl. Sci. 2020, 10, 254. [Google Scholar] [CrossRef] [Green Version]
  7. Jung, C.; Kelber, C. Lane following and lane departure using a linear-parabolic model. Image Vis. Comput. 2005, 23, 1192–1202. [Google Scholar] [CrossRef]
  8. An, X.; Shang, E.; Song, J.; Li, J.; He, H. Real-time lane departure warning system based on a single FPGA. EURASIP J. Image Video Process. 2013, 38, 1–18. [Google Scholar] [CrossRef] [Green Version]
  9. Wu, P.; Chang, C.; Lin, C. Lane mark extraction for automobiles under complex conditions. Pattern Recognit. 2014, 47, 2756–2767. [Google Scholar] [CrossRef]
  10. Son, J.; Yoo, H.; Kim, S.; Sohn, K. Real-time illumination invariant lane detection for lane departure warning system. Expert Syst. Appl. 2015, 42, 1816–1824. [Google Scholar] [CrossRef]
  11. Karasulu, B. Automatic extraction of retinal blood vessels: A software implementation. Eur. Sci. J. 2012, 8, 47–57. [Google Scholar]
  12. Majumdar, J.; Kundu, D.; Tewary, S.; Ghosh, S.; Chakraborty, S.; Gupta, S. An automated graphical user interface based system for the extraction of retinal blood vessels using Kirsch’s template. Int. J. Adv. Comput. Sci. Appl. 2015, 6, 86–93. [Google Scholar] [CrossRef] [Green Version]
  13. Wang, W.; Yang, N.; Zhang, Y.; Wang, F.; Cao, T.; Eklund, P. A review of road extraction from remote sensing images. J. Traffic Transp. Eng. 2016, 3, 271–282. [Google Scholar] [CrossRef] [Green Version]
  14. Roberts, L. Machine perception of three-dimensional solids. In Optical and Electro-Optical Information Processing; Tippet, J., Berkowitz, D., Clapp, L.C., Koester, C.J., Alexander Vanderburgh, J., Eds.; MIT Press: Cambridge, MA, USA, 1965; pp. 159–197. [Google Scholar]
  15. Pingle, K. Visual perception by computer. In Automatic Interpretation and Classification of Images; Grasselli, A., Ed.; Academic Press: New York, NY, USA, 1969; pp. 277–284. [Google Scholar]
  16. Prewitt, J. Object enhancement and extraction. In Picture Processing and Psychophysics; Rosenfeld, A., Lipkin, B., Eds.; Academic Press: New York, NY, USA, 1970; pp. 75–149. [Google Scholar]
  17. Marr, D.; Hildreth, E. Theory of edge detection. Proc. R. Soc. Lond. 1980, 207, 187–217. [Google Scholar] [PubMed]
  18. Canny, J. A computational approach to edge detection. IEEE Trans. Pattern Anal. Mach. Intell. 1986, PAMI-8, 679–698. [Google Scholar] [CrossRef]
  19. Steger, C. An unbiased detector of curvilinear structures. IEEE Trans. Pattern Anal. Mach. Intell. 1998, 20, 113–125. [Google Scholar] [CrossRef] [Green Version]
  20. Zwiggelaar, R.; Astley, S.M.; Boggis, C.R.M.; Taylor, C.J. Linear structures in mammographic images: Detection and classification. IEEE Trans. Med Imaging 2004, 23, 1077–1086. [Google Scholar] [CrossRef]
  21. Lopez, A.M.; Lumbreras, F.; Serrat, J.; Villanueva, J.J. Evaluation of methods for ridge and valley detection. IEEE Trans. Pattern Anal. Mach. Intell. 1999, 21, 327–335. [Google Scholar] [CrossRef] [Green Version]
  22. Paton, K. Picture description using Legendre polynomials. Comput. Graph. Image Process. 1975, 4, 40–54. [Google Scholar] [CrossRef]
  23. Ghosal, S.; Mehrotra, R. Detection of composite edges. IEEE Trans. Image Process. 1994, 3, 14–25. [Google Scholar] [CrossRef] [PubMed]
  24. Lindeberg, T. Edge detection and ridge detection with automatic scale selection. Int. J. Comput. Vis. 1998, 30, 117–154. [Google Scholar] [CrossRef]
  25. Laptev, I.; Mayer, H.; Lindeberg, T.; Eckstein, W.; Steger, C.; Baumgartner, A. Automatic extraction of roads from aerial images based on scale space and snakes. Mach. Vis. Appl. 2000, 12, 23–31. [Google Scholar] [CrossRef] [Green Version]
  26. Jeon, B.K.; Jang, J.H.; Hong, K.S. Road detection in spaceborne SAR images using a genetic algorithm. IEEE Trans. Geosci. Remote Sens. 2002, 40, 22–29. [Google Scholar] [CrossRef]
  27. Lopez-Molina, C.; de Ulzurrun, G.V.D.; Baetens, J.; den Bulcke, J.V.; Baets, B.D. Unsupervised ridge detection using second order anisotropic Gaussian kernels. Signal Process. 2015, 116, 55–67. [Google Scholar] [CrossRef]
  28. Cornelis, B.; Ruzic, T.; Gezels, E.; Doomes, A.; Pizurica, A.; Platisa, L.; Cornelis, J.; Martens, M.; Mey, M.D.; Daubechies, I. Crack detection and inpainting for virtual restoration of paintings: The case of the Ghent Altarpiece. Signal Process. 2013, 93, 605–619. [Google Scholar] [CrossRef]
  29. Yan, C.; Shao, B.; Zhao, H.; Ning, R.; Zhang, Y. 3D room layout estimation from a single RGB image. IEEE Trans. Multimed. 2020, 22, 3014–3024. [Google Scholar] [CrossRef]
  30. Lagendijk, R.L.; Tekalp, A.M.; Biemond, J. Maximum likelihood image and blur identification: A unifying approach. Opt. Eng. 1990, 29, 422–435. [Google Scholar]
  31. Reeves, S.J.; Mersereau, R.M. Blur identification by the method of generalized cross-validation. IEEE Trans. Image Process. 1992, 1, 301–311. [Google Scholar] [CrossRef] [Green Version]
  32. Savakis, A.E.; Trussell, H.J. Blur identification by residual spectral matching. IEEE Trans. Image Process. 1993, 2, 141–151. [Google Scholar] [CrossRef] [Green Version]
  33. Kundur, D.; Hatzinakos, D. Blind image deconvolution. IEEE Signal Process. Mag. 1996, 13, 43–64. [Google Scholar] [CrossRef] [Green Version]
  34. Yitzhaky, Y.; Kopeika, N.S. Identification of blur parameters from motion blurred images. Graph. Model. Image Process. 1997, 59, 310–320. [Google Scholar] [CrossRef] [Green Version]
  35. Chen, L.; Yap, K.H. Efficient discrete spatial techniques for blur support identification in blind image deconvolution. IEEE Trans. Signal Process. 2006, 54, 1557–1562. [Google Scholar] [CrossRef]
  36. Wu, S.; Lin, W.; Xie, S.; Lu, Z.; Ong, E.P.; Yao, S. Blind blur assessment for vision-based applications. J. Vis. Commun. Image Represent. 2009, 20, 231–241. [Google Scholar] [CrossRef]
  37. Hu, W.; Xue, J.; Zheng, N. PSF estimation via gradient domain correlation. IEEE Trans. Image Process. 2012, 21, 386–392. [Google Scholar] [CrossRef] [PubMed]
  38. Liu, S.; Wang, H.; Wang, J.; Cho, S.; Pan, C. Automatic blur-kernel-size estimation for motion deblurring. Vis. Comput. 2015, 31, 733–746. [Google Scholar] [CrossRef]
  39. Seo, S. Edge modeling by two blur parameters in varying contrast. IEEE Trans. Image Process. 2018, 27, 2701–2714. [Google Scholar] [CrossRef] [PubMed]
  40. Bromiley, P.A. Products and Convolutions of Gaussian Probability Density Functions; TINA: Manchester, UK, 2018. [Google Scholar]
  41. Seo, S. Subpixel edge localization based on adaptive weighting of gradients. IEEE Trans. Image Process. 2018, 27, 5501–5513. [Google Scholar] [CrossRef]
Figure 1. Line profile model used in this study.
Figure 2. Gaussian blur for a line model.
Figure 3. SNRC and SNRPS of edge detection. (a) SNRC(edge) and (b) SNRPS(edge).
Figure 4. SNRC and SNRPS of the SD-based line detection. (a) SNRC(line SD ); (b) SNRPS(line SD ).
Figure 5. Comparison of SNRs of the ED-based and the SD-based line detections. (a) SNRPS(lineED) and (b) SNRPS(line SD )−SNRPS(lineED). In panel (b), areas with negative values are denoted with white grid lines.
Figure 6. Pairings for calculating gradient angle differences.
Figure 7. Neighborhood pixels for calculating gradient angle difference. (a) Column direction; (b) diagonal direction.
Figure 8. SNRPS(lineSGAD) − SNRPS(lineSD). (a) 3-D view and (b) contours.
Figure 9. Lena image.
Figure 10. Line detection results for the Lena subset images. The first to last columns show the original images, smoothed images, ground truth line pixels, line width w, contrast k, $\log_{10}(\mathrm{SNR})$ of the SD- and SGAD-based line detections when $\sigma_n = 8.0$, and line pixels detected by the SD- and SGAD-based line detections under $\sigma_n = 8.0$. The first, third, and fifth rows show the results for ridges, and the second, fourth, and sixth rows show the results for valleys.
Figure 11. Example of binning results. The blue line shows the histogram under a given bin size and the red lines the boundaries of the new bins.
Figure 12. Completeness and correctness against SNR for the Lena image under varying $\sigma_n$. The top row shows completeness and the bottom row correctness.
Figure 13. Completeness and correctness against SNR for the Lena image. The second and fourth plots show zoomed-in versions of the first and third plots with SNR range [0, 10], respectively.
Figure 14. Natural images. (Top) From left to right: Baboon, Barbara, Boats, Cablecar; (Bottom) From left to right: Flowers, Goldhill, Monarch, and Yacht.
Figure 15. Completeness and correctness against SNR for the natural images. The second and fourth columns show zoomed-in versions of the plots in the first and third columns with SNR range [0, 10], respectively.
Publisher’s Note: MDPI stays neutral with regard to jurisdictional claims in published maps and institutional affiliations.

