Article

Find Outliers of Image Edge Consistency by Weighted Local Linear Regression with Equality Constraints †

Mingzhu Zhu, Yaoqing Hu, Junzhi Yu, Bingwei He and Jiantao Liu
1 BIC-ESAT, Department of Advanced Manufacturing and Robotics, College of Engineering, Peking University, Beijing 100089, China
2 School of Automation and Electrical Engineering, University of Science and Technology Beijing, Beijing 100089, China
3 Institute of Mechanical Engineering, Fuzhou University, Fuzhou 350000, China
* Author to whom correspondence should be addressed.
This paper is an extended version of the paper: Zhu, M.; Gao, Z.; Yu, J.; He, B.; Liu, J. ALRe: Outlier Detection for Guided Refinement. In Proceedings of the European Conference on Computer Vision, Glasgow, UK, 23–28 August 2020; pp. 788–802.
Sensors 2021, 21(7), 2563; https://doi.org/10.3390/s21072563
Submission received: 11 March 2021 / Revised: 29 March 2021 / Accepted: 2 April 2021 / Published: 6 April 2021
(This article belongs to the Section Sensing and Imaging)

Abstract

In this paper, we propose a general method to detect outliers in the contaminated estimates of various image estimation applications. The method does not require any prior knowledge about the purpose, theory or hardware of the application but simply relies on the law of edge consistency between sources and estimates. The method is termed ALRe (anchored linear residual) because it is based on the residual of weighted local linear regression with an equality constraint exerted on the measured pixel. Given a pair of source and contaminated estimate, ALRe offers per-pixel outlier likelihoods, which can be used to compose the data weights of post-refinement algorithms, improving the quality of the refined estimate. ALRe has the features of asymmetry, no false positive and linear complexity. Its effectiveness is verified on four applications, four post-refinement algorithms and three datasets. The results demonstrate that, with the help of ALRe, refined estimates are better in terms of both quality and edge consistency, and are even comparable to model-based and hardware-based methods. Accuracy comparisons on synthetic images show that ALRe detects outliers reliably: it is as effective as the mainstream weighted median filter at spike detection and is significantly better at bad region detection.

1. Introduction

Edge consistency is crucial in many image processing applications that produce per-pixel estimates from source images, such as depth estimation, object segmentation and alpha matting. Although these applications have very different models and purposes, their estimates are all expected to have edges consistent with the sources.
Edge consistency has a popular description: “the output image has an edge only if the input image has an edge”. The meaning of this description is twofold. Firstly, it means “the estimate should be smooth if the source is smooth”. Secondly, it means “the estimate could be smooth whether the source has an edge or not”. This is reasonable since most estimation processes are not one-to-one mappings; pixels with different values might be mapped to the same value. Therefore, the source and the estimate play different roles in the context of edge consistency.
Although estimates are expected to fulfill edge consistency, this is often not the case due to noise and outliers. Various algorithms have been proposed for noise suppression, including global optimizations [1,2,3] and local filters [4,5]; however, outliers are less handled in the context of images [6,7,8,9].
Noise is derived from sensors and environments, leading to small biases that can usually be well described by statistical models. Outliers are caused by unexpected samples or improper designs of processing methods (termed artifacts in this situation). They might cause large offsets and take very different forms between applications. Therefore, outliers are hard to model and suppress in a general way. The options on the table are limited:
  • Adopt a median filter;
  • Ignore them and treat them as noise;
  • Design prior-, model- or hardware-based methods;
  • Train a data-driven method [10].
The median filter and its improved versions are good at removing small outlier regions such as spikes [11,12,13]. However, outliers might occupy large regions and happen to be the medians. Noise suppression methods are usually not aggressive enough to remove outliers, leaving artifacts such as halo effects. In some applications, outlier detection methods are specifically designed based on a prior, model or hardware. These methods are effective but not general. In most cases, we do not have a satisfactory option.
In this paper, we show that edge consistency itself is a valuable and under-exploited clue for general outlier detection. A hypothesize-and-verify algorithm termed ALRe (anchored linear residual) is proposed to find pixels undermining edge consistency. It offers per-pixel outlier likelihoods of the estimate based on the source. The likelihoods can be transformed into inlier fidelities and then used as the data weights of various post-refinement algorithms, producing better refined estimates. The algorithm requires no prior knowledge about the application or the estimation method and has complexity linear in the number of pixels. To the best of our knowledge, ALRe is the first general outlier detection method based on edge consistency. A preliminary version of ALRe was published in ECCV 2020 [14]. In this paper, we provide the full feature analysis and quantitative results on various applications, and compare its detection accuracy with the mainstream weighted median filter.
The rest of this paper is organized as follows. Section 2 introduces post-refinement algorithms and several application-specific outlier detection methods. Section 3 presents the details of ALRe. Section 4 analyzes its advantages and compares it with other options. Section 5 gives examples. In Section 6, the value of ALRe as an add-on of post-refinements is investigated, and its accuracy is tested on synthetic images. Conclusions are given in Section 7.

2. Related Works

Although a corresponding outlier detection method is often absent, edge consistency has been considered in various algorithms, such as WLS (weighted least squares) [2], JBF (joint bilateral filter) [4], GF (guided filter) [5] and WMF (weighted median filter) [11]. Note that these algorithms have multiple usages; the following introduction only covers post-refinement for image estimation. In this situation, they pursue a refined estimate q, which has intensities similar to the contaminated estimate p and edges consistent with the source I.

2.1. Weighted Least Squares

WLS [2] has a straightforward definition following the concept of edge consistency closely. It finds the optimal q minimizing the energy function
$$E(q) = \sum_i w_i (q_i - p_i)^2 + \lambda \sum_{(i,j) \in J} a_{ij} (q_i - q_j)^2, \tag{1}$$
where i is the pixel index, J is the set of adjacent pixel pairs, and λ balances the two terms. The smooth weight a_ij is inversely correlated with the distance between I_i and I_j. WLS has been continuously improved, for example into efficient semi-global WLS [15] and constrained WLS [16], to keep pace with its applications. However, the data weight w_i, which should correlate with the fidelity of p_i, is usually left undefined.
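As a concrete illustration, the following is a minimal sketch of this kind of weighted least squares refinement on a grayscale image, assuming a 4-connected grid, Gaussian smooth weights a_ij derived from guide differences, and a sparse direct solver. The helper name wls_refine and the parameters lam and sigma_a are our own illustrative choices, not settings from [2] or [15,16].

```python
import numpy as np
import scipy.sparse as sp
import scipy.sparse.linalg as spla

def wls_refine(p, I, w, lam=1.0, sigma_a=0.1):
    """Solve (W + lam * L) q = W p, where L is a graph Laplacian built from a_ij."""
    h, wd = p.shape
    n = h * wd
    idx = np.arange(n).reshape(h, wd)
    rows, cols, vals = [], [], []
    for di, dj in [(0, 1), (1, 0)]:                 # horizontal and vertical neighbors
        a = np.exp(-((I[di:, dj:] - I[:h - di, :wd - dj]) ** 2) / sigma_a ** 2)
        i0 = idx[:h - di, :wd - dj].ravel()
        i1 = idx[di:, dj:].ravel()
        av = a.ravel()
        rows += [i0, i1, i0, i1]
        cols += [i1, i0, i0, i1]
        vals += [-av, -av, av, av]                  # off-diagonal and diagonal parts of L
    L = sp.csr_matrix((np.concatenate(vals),
                       (np.concatenate(rows), np.concatenate(cols))), shape=(n, n))
    W = sp.diags(w.ravel())                         # data weights, e.g., ALRe fidelities
    q = spla.spsolve(W + lam * L, W @ p.ravel())
    return q.reshape(h, wd)

# usage sketch: q = wls_refine(p, I_gray, w=np.ones_like(p))  # or w from ALRe
```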

2.2. Joint Bilateral Filter

JBF [4] produces q by smoothing p based on I , as
$$q_i = \sum_{j \in \Omega_i} K_{ij}\, p_j, \quad K_{ij} = \frac{1}{Z_i}\, w_j\, s_{ij}\, c_{ij}, \quad s_{ij} = \exp\!\left(-\frac{\|x_i - x_j\|^2}{\sigma_s^2}\right), \quad c_{ij} = \exp\!\left(-\frac{\|I_i - I_j\|^2}{\sigma_c^2}\right), \tag{2}$$
where x is the vector of pixel coordinates, Z_i is a normalizing factor, and Ω_i is the local region centered at pixel i. There are two kinds of weights: the distance weight s and the color weight c. The parameters σ_s and σ_c adjust the sensitivities to spatial and color similarities, respectively. The bilateral filter has many other variants with similar structures, such as the guided bilateral filter [17] and the optimally weighted bilateral filter [18]. However, the data weight w_j is usually undefined.
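For illustration, a brute-force sketch of Equation (2) with an optional per-pixel data weight w is given below; np.roll wraps around at image borders, which is a simplification, and the window radius and sigmas are illustrative values.

```python
import numpy as np

def joint_bilateral(p, I, w=None, radius=5, sigma_s=3.0, sigma_c=0.1):
    w = np.ones_like(p, dtype=float) if w is None else w
    num = np.zeros_like(p, dtype=float)
    den = np.zeros_like(p, dtype=float)
    for dy in range(-radius, radius + 1):
        for dx in range(-radius, radius + 1):
            s = np.exp(-(dx * dx + dy * dy) / sigma_s ** 2)     # distance weight s_ij
            Ij = np.roll(I, (dy, dx), axis=(0, 1))
            pj = np.roll(p, (dy, dx), axis=(0, 1))
            wj = np.roll(w, (dy, dx), axis=(0, 1))
            c = np.exp(-((I - Ij) ** 2) / sigma_c ** 2)         # color weight c_ij
            k = wj * s * c                                      # data weight times kernel
            num += k * pj
            den += k
    return num / den                                            # normalization by Z_i
```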

2.3. Guided Filter

GF [5] assumes local linear relationship between q and I and then solves the optimal q closest to p. It is defined as
$$\begin{aligned} q_i &= \frac{1}{|\Omega|} \sum_{k \in \Omega_i} w^a_{ik} \left(a_k^T I_i + b_k\right), \\ (a_k, b_k) &= \arg\min_{a_k, b_k} \sum_{j \in \Omega_k} R_{jk}(a_k, b_k) + \epsilon\, w^w_k\, a_k^T a_k, \\ R_{jk}(a_k, b_k) &= w_j\, w^t_{jk} \left(a_k^T I_j + b_k - p_j\right)^2, \end{aligned} \tag{3}$$
where a and b are linear parameters, ϵ suppresses large a for smoothness, and |Ω| denotes the pixel number of Ω. GF has been improved into many versions. The anisotropic guided filter [19] contains the weight w^a_{ik}, and the weighted guided filter [20] contains w^w_k. Dai et al. [21] relaxed the local support region Ω to the entire image domain and introduced the weight w^t_{jk} based on a minimum spanning tree. Additionally, a kind of bidirectional guided filter can be found in [22]. These methods introduce various benefits, such as stronger edge-preserving behavior and less halo effect. However, to take advantage of pixel fidelities, another kind of weight, w_j, is required. It is not originally included in [5] but can be easily implemented without increasing the complexity.
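The following is a minimal grayscale sketch of a guided filter in the spirit of Equation (3), extended with the per-pixel data weight w_j mentioned above; the weights w^a, w^w and w^t of [19,20,21] are omitted, and the boxfilter radius r and eps are illustrative values.

```python
import numpy as np
from scipy.ndimage import uniform_filter

def box(x, r):
    return uniform_filter(x, size=2 * r + 1, mode='reflect')

def guided_filter_weighted(p, I, w=None, r=17, eps=1e-3):
    w = np.ones_like(p, dtype=float) if w is None else w
    mw  = box(w, r)
    mI  = box(w * I, r) / mw                      # data-weighted local means
    mp  = box(w * p, r) / mw
    mII = box(w * I * I, r) / mw
    mIp = box(w * I * p, r) / mw
    a = (mIp - mI * mp) / (mII - mI * mI + eps)   # weighted least-squares slope
    b = mp - a * mI
    return box(a, r) * I + box(b, r)              # average (a_k, b_k) over covering windows
```

With w set to all ones, this sketch reduces to the standard guided filter of [5] (up to border handling).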

2.4. Weighted Median Filter

WMF [11] produces q by picking values from p. It is robust to outliers because unpicked pixels have no impact on the result. WMF is defined as
$$h(i, v) = \sum_{j \in \Omega_i} w_j\, w_{i,j}\, \delta(p_j - v), \qquad q_i = v^* \ \ \text{s.t.}\ \ \sum_{v=l}^{v^*} h(i,v) \le \frac{1}{2}\sum_{v=l}^{u} h(i,v) < \sum_{v=l}^{v^*+1} h(i,v), \tag{4}$$
where δ(x) is 1 if x equals 0 and is 0 otherwise, and l and u are the lower and upper limits of the value range. The weight w_{i,j} depends on I_i and I_j (in this paper, it is produced based on the kernel of the guided filter). The median filter has been improved in both robustness [12] and efficiency [13]. However, it might fail when the filter size is large or when some outliers happen to be the medians. This problem can be alleviated if the fidelity of each single pixel is available, which requires the weight denoted as w_j. It is not originally included in [11] but can be easily implemented without increasing the complexity.
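A brute-force sketch of a weighted median in the spirit of Equation (4) is shown below: the output at each pixel is the value whose cumulative weight first reaches half of the total weight in the window. For simplicity, the affinity weight w_{i,j} is a color Gaussian rather than the guided filter kernel used in the paper, and the optional fidelity w_j multiplies it so that unreliable pixels are rarely picked.

```python
import numpy as np

def weighted_median(p, I, w=None, radius=5, sigma_c=0.1):
    h, wd = p.shape
    w = np.ones_like(p, dtype=float) if w is None else w
    q = np.zeros_like(p, dtype=float)
    pad = lambda x: np.pad(x, radius, mode='edge')
    P, G, W = pad(p), pad(I), pad(w)
    for y in range(h):
        for x in range(wd):
            ys = slice(y, y + 2 * radius + 1)
            xs = slice(x, x + 2 * radius + 1)
            vals = P[ys, xs].ravel()
            aff = np.exp(-((G[ys, xs] - I[y, x]) ** 2) / sigma_c ** 2).ravel()
            wts = aff * W[ys, xs].ravel()
            order = np.argsort(vals)
            csum = np.cumsum(wts[order])
            k = np.searchsorted(csum, 0.5 * csum[-1])   # first index past half the mass
            q[y, x] = vals[order[k]]
    return q
```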

2.5. Outlier Detection

In the field of haze removal, Fattal [23] proposed the color-line model and Berman et al. [24] proposed the haze-line model. Outliers in their initial estimates are detected based on the variances of the fitted color-lines and the effective lengths of the collected haze-lines, respectively. WLS is employed for post-refinement, and the data weight w is provided based on the detection results. This introduces robustness, since pixels not following their models affect the final estimates little. However, these detection methods are only applicable to their corresponding models and thus cannot be generalized to other models and applications.
In the field of disparity estimation, outliers can be robustly detected by cross check [25]. Pixels having different estimates between left-to-right and right-to-left matching are considered outliers, and their weights are set to zero. JBF is employed for post-refinement, whose data weight w is provided based on the detection results. Despite its robustness, cross check is also not generalizable because it requires multiple source images captured by specific hardware, as sketched below.
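A minimal sketch of such a cross check is given below, assuming left-to-right and right-to-left disparity maps d_left and d_right are available; pixels whose two estimates disagree by more than a threshold receive zero weight. The one-pixel threshold is an illustrative choice.

```python
import numpy as np

def cross_check(d_left, d_right, thresh=1.0):
    h, w = d_left.shape
    xs = np.tile(np.arange(w), (h, 1))
    # position of each left-image pixel in the right image
    x_right = np.clip(xs - np.round(d_left).astype(int), 0, w - 1)
    d_back = np.take_along_axis(d_right, x_right, axis=1)
    consistent = np.abs(d_left - d_back) <= thresh
    return consistent.astype(float)          # 1 = kept as inlier, 0 = outlier
```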
More often, an outlier detection method is absent, and the post-refinement algorithms are used without data weights. In this case, outliers are treated as noise and not well removed, although most of them are obvious from the viewpoint of edge consistency. In this paper, we propose ALRe to realize this simple, general and effective check.

3. Method

3.1. Intuition

ALRe is based on the fact that the estimate p and the source I satisfy edge consistency if a local linear relationship is established between them. Denote the local region centered at k by Ω_k; the local linear assumption is satisfied if
$$p_i = a_k^T I_i + b_k, \quad \forall i \in \Omega_k, \tag{5}$$
where ( a k , b k ) are linear parameters. It implies edge consistency because
$$\Delta p_i = a_k^T \Delta I_i. \tag{6}$$
On the other hand, edge consistency does not always imply a local linear relationship, especially when I restricted to Ω_k has notably more edges than p restricted to Ω_k. However, with a proper mask size, Ω_k contains few edges and colors; thus, the assumption is reasonable enough in most cases.
Linear regression can be used in each Ω_k to approximate Equation (5) as closely as possible. Based on the relationship between the local linear assumption and edge consistency, the smaller the residual, the better the consistency. However, simply adopting linear regression has two problems:
  • The residual of linear regression indicates the degree of edge consistency in Ω_k, rather than the fidelity of the single pixel k. It cannot help post-refinement algorithms;
  • All the pixels in Ω_k are considered as inliers. Outliers have strong impacts on the regression, especially when least squares regression is used.
To solve these problems, we refer to RANSAC (random sample consensus) [26], a hypothesize-and-verify algorithm that first assumes inliers and then calculates the fidelity of the inlier assumption based on all the samples. We first make an inlier assumption at each pixel k and then evaluate the assumption in Ω_k. For each pixel k, we
  • Assume edge consistency in Ω k ;
  • Assume p k is an inlier;
  • Evaluate the two assumptions by weighted regression;
  • Calculate the inlier fidelity of p k based on the residual.
When this is complete, a fidelity map with the same size as p is available. It is transformed into data weights w and then used in the next round of regressions to further suppress the impact of outliers.

3.2. Algorithm

The edge consistency assumption and the inlier assumption imply a small residual e defined as
$$e_k = \min_{a_k, b_k} \frac{1}{\sum_{i \in \Omega_k} w_i} \sum_{i \in \Omega_k} w_i \left(a_k^T I_i + b_k - p_i\right)^2 \quad \text{s.t.} \quad p_k = a_k^T I_k + b_k. \tag{7}$$
The inlier fidelity w is negatively correlated with e as
$$w_k = \frac{\dfrac{1}{\max(\mathrm{LB}, \min(\mathrm{UB}, e_k))} - \dfrac{1}{\mathrm{UB}} + \epsilon}{\dfrac{1}{\mathrm{LB}} - \dfrac{1}{\mathrm{UB}} + \epsilon}, \tag{8}$$
where (LB, UB) are the lower and upper bounds of e. When e_k falls below LB or above UB, pixel k is considered an absolute inlier or an absolute outlier, respectively. ϵ is a small number for numerical stability.
The energy function and the equality constraint of Equation (7) can be combined into
$$e_k = \min_{a_k} \frac{\sum_{i \in \Omega_k} w_i \left(a_k^T (I_i - I_k) - (p_i - p_k)\right)^2}{\sum_{i \in \Omega_k} w_i}. \tag{9}$$
Further derivation gives
$$C_k = \sum_{i \in \Omega_k} w_i (I_i - I_k)(I_i - I_k)^T, \quad d_k = \sum_{i \in \Omega_k} w_i (p_i - p_k)(I_i - I_k), \quad a_k = C_k^{-1} d_k, \quad b_k = p_k - a_k^T I_k. \tag{10}$$
In implementation, this is computed as
$$\begin{aligned} C_k &= (\overline{w I I^T})_k + (\bar{w})_k I_k I_k^T - (\overline{w I})_k I_k^T - I_k (\overline{w I^T})_k + \boldsymbol{\epsilon}, \\ d_k &= (\overline{w p I})_k - p_k (\overline{w I})_k - (\overline{w p})_k I_k + (\bar{w})_k p_k I_k, \\ a_k &= C_k^{-1} d_k, \\ b_k &= p_k - a_k^T I_k, \\ e_k &= \frac{a_k^T (\overline{w I I^T})_k a_k + (\bar{w})_k b_k^2 + (\overline{w p^2})_k + 2 b_k a_k^T (\overline{w I})_k - 2 a_k^T (\overline{w p I})_k - 2 b_k (\overline{w p})_k}{(\bar{w})_k} + \epsilon, \end{aligned} \tag{11}$$
where the bold ϵ in the first line is a diagonal matrix whose elements all equal ϵ, (p̄)_k is the mean value of p in Ω_k, and the other barred terms are defined analogously.
Since e and w are interdependent, an iteration strategy with w^0 = 1 is employed as
$$w^0 \rightarrow \cdots \rightarrow w^t \rightarrow e^{t+1} \rightarrow w^{t+1} \rightarrow \cdots, \tag{12}$$
and the terminal condition is
$$\Delta e^{t+1} = \sum_k \left|e_k^{t+1} - e_k^t\right| < \epsilon. \tag{13}$$
In practice, the number of iterations is usually 5–10.
The overall algorithm is summarized in Algorithm 1. The final w is the inlier fidelity and 1 − w is the outlier likelihood. Note that the mean values can be calculated by boxfilter with O(N) complexity, where N is the pixel number. The other operations in Equations (8) and (11) are O(N) too, and the number of iterations is independent of N. Therefore, the algorithm is O(N) overall. Compared to linear regression without the equality constraint [5], which leads to a solution similar to Equation (10), only several mean values are replaced by particular ones; the runtime is even slightly lower. When handling 640×480 images, each iteration of our ALRe demo (provided at https://github.com/Lilin2015/Author—ALRe (accessed on 5 April 2021)) takes about 0.3 s.
Algorithm 1 ALRe
Require: I, p, ϵ, LB, UB
Ensure: w
 1: set t = 0, Δe^t = 1
 2: set e_k^t = 1, w_k^t = 1 for all k
 3: while Δe^t ≥ ϵ do
 4:   calculate the boxfilter means of w^t, w^t p, w^t I, w^t p^2, w^t I I^T and w^t p I
 5:   for each k do
 6:     calculate e_k^{t+1} by Equation (11) using w_k = w_k^t
 7:     calculate w_k^{t+1} by Equation (8) using e_k = e_k^{t+1}
 8:   end for
 9:   Δe^{t+1} = Σ_k |e_k^{t+1} − e_k^t|
10:   t = t + 1
11: end while
12: w = w^t
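To make Algorithm 1 concrete, the following is a minimal single-channel Python sketch, assuming a grayscale source I and estimate p in [0, 1]. The paper handles color guides (vector a_k), while here a_k is a scalar, so Equation (11) reduces to scalar boxfilter means; the function and parameter names are ours, not the reference implementation.

```python
import numpy as np
from scipy.ndimage import uniform_filter

def box(x, r):
    return uniform_filter(x, size=2 * r + 1, mode='reflect')

def alre(I, p, r=12, eps=1e-3, LB=0.01, UB=0.3, max_iter=10):
    w = np.ones_like(p, dtype=float)
    e_prev = np.ones_like(p, dtype=float)
    for _ in range(max_iter):
        # boxfilter means of Equation (11), anchored at the center pixel
        mw, mwI, mwp = box(w, r), box(w * I, r), box(w * p, r)
        mwII, mwpI, mwp2 = box(w * I * I, r), box(w * p * I, r), box(w * p * p, r)
        C = mwII + mw * I * I - 2.0 * mwI * I + eps
        d = mwpI - p * mwI - mwp * I + mw * p * I
        a = d / C
        b = p - a * I
        e = (a * a * mwII + mw * b * b + mwp2
             + 2 * a * b * mwI - 2 * a * mwpI - 2 * b * mwp) / mw + eps
        e = np.maximum(e, 0.0)
        # truncated inverse mapping of Equation (8): residual -> inlier fidelity
        ec = np.clip(e, LB, UB)
        w = (1.0 / ec - 1.0 / UB + eps) / (1.0 / LB - 1.0 / UB + eps)
        if np.sum(np.abs(e - e_prev)) < eps:     # terminal condition, Equation (13)
            break
        e_prev = e
    return w        # inlier fidelity; 1 - w is the outlier likelihood
```

The returned fidelities can be plugged in as the data weights of the refinement sketches in Section 2, e.g., wls_refine(p, I, w=alre(I, p)).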

4. Analysis

4.1. Asymmetry

Linear transformations on I and shifts on p have no impact on the result of w. Firstly, with fixed w, the conclusion of
$$e(p + \beta_p,\ \alpha_I I + \beta_I) = e(p, I) \tag{14}$$
can be proven based on Equation (10) as
$$\tilde{I} = \alpha_I I + \beta_I,\ \ \tilde{p} = p + \beta_p \ \Rightarrow\ \tilde{C} = \alpha_I^2 C,\ \ \tilde{d} = \alpha_I d,\ \ \tilde{a} = a/\alpha_I,\ \ \tilde{b} = b + \beta_p - a^T \beta_I / \alpha_I,\ \ \tilde{e} = e. \tag{15}$$
Then, the same e results in the same w.
On the other hand, with fixed w, a scaling on p leads to
$$e(\alpha_p p, I) = \alpha_p^2\, e(p, I). \tag{16}$$
If we simply consider the inverse relationship between e and w, this results in w̃ = w/α_p², but the real situation is more complex due to the truncated mapping of Equation (8). In any case, it is clear that ALRe has asymmetric responses to I and p.
The asymmetry fulfills the concept of edge consistency. Consider a pair of I and p with moderate sharpness:
  • When α_p is large, p has sharp edges, and e is small only if I and p closely follow the local linear assumption, because of the large factor α_p² in Equation (16). This corresponds to the description “the estimate should be smooth if the source is smooth”;
  • When α_p is small, p is smooth, and e is small because of the small α_p². The sharpness of I is unessential. This corresponds to the description “the estimate could be smooth whether the source has an edge or not”.
In most applications, I has many more edges than p, but this does not always lead to small w, because of this asymmetry. A small numeric check of these properties is sketched below.
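As a quick, informal numeric check of Equations (14)–(16), the snippet below applies the alre() sketch from Section 3.2 to a synthetic pair; it only illustrates the claims and is not part of the paper's experiments. Because of the ϵ regularization inside C, the invariance of Equation (14) holds only approximately.

```python
import numpy as np

rng = np.random.default_rng(0)
I = rng.random((64, 64))
p = 0.7 * I + 0.1 + 0.05 * rng.standard_normal(I.shape)   # roughly edge-consistent pair

w1 = alre(I, p)                       # baseline fidelities
w2 = alre(2.0 * I + 0.3, p + 0.2)     # linear transform on I, shift on p
print(np.max(np.abs(w1 - w2)))        # expected to be small (Equation (14))

w3 = alre(I, 3.0 * p)                 # scaling p amplifies residuals (Equation (16))
print(w3.mean() <= w1.mean())         # expected: True, fidelities drop
```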

4.2. No False Positive

The main difference between each iteration of ALRe and LRe (linear residual, produced by naive least squares regression) is the equality constraint. The previous discussion shows that introducing this constraint does not sacrifice the asymmetry property; however, what is the benefit?
To answer this question, we conduct simulations. Firstly, random I and (a, b) are generated in [0, 1]. Then, p is calculated by p_k = a_k^T I_k + b_k. Based on a preset bad pixel ratio, some pixels of p are modified by random biases in [0.05, 0.5]. Ten levels of bad pixel ratio, from 5% to 50%, are tested. For each level, 10^5 sources and contaminated estimates are generated. The linear residuals of bad pixels with and without the equality constraint are recorded.
The results are shown in Figure 1, where LRe means the naive linear residual, and ALRe means the anchored linear residual of the first iteration.
The results for four levels of bad pixel ratio are displayed in Figure 1a–d, respectively. As expected, both LRe and ALRe increase when the bias increases. LRe is more sensitive to large biases and ALRe is more sensitive to small biases. However, as the bad pixel ratio increases, the mean LRe decreases while the mean ALRe remains stable or even slightly increases. Figure 1e illustrates this phenomenon, where the slope of ALRe is obviously smaller. It means that ALRe is more robust to the bad pixel ratio. This is the first benefit.
More importantly, if we further investigate the minimal residual of each test, it turns out that LRe has a false positive problem. The minimal LRe and ALRe are drawn as thin curves in Figure 1a–d. As can be seen, some segments of the blue curves overlap the x-axis when the biases are small. The larger the bad pixel ratio, the longer the overlapped segment. For convenience, we term the x-value of the right end of the overlapped segment the false positive threshold, since bad pixels with biases smaller than this value might have zero residuals and be recognized as inliers. Figure 1f illustrates this phenomenon, where the false positive threshold of LRe increases as the bad pixel ratio increases. As a comparison, ALRe has no false positive problem. This is the second and the major benefit. A minimal version of this simulation is sketched below.
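The sketch below runs one trial of the simulation for a single window with a scalar guide, under one plausible reading of the setup: LRe is the residual of an ordinary least squares fit evaluated at the tested bad pixel, while ALRe anchors the fit at that pixel as in Equation (9) with w = 1. The window size, bias and ratio are illustrative.

```python
import numpy as np

rng = np.random.default_rng(1)

def one_trial(n=121, bad_ratio=0.3, bias=0.2):
    I = rng.random(n)
    a, b = rng.random(), rng.random()
    p = a * I + b
    bad = rng.choice(n, size=int(bad_ratio * n), replace=False)
    p[bad] += rng.choice([-1.0, 1.0], size=bad.size) * bias
    k = bad[0]                                    # evaluate one bad pixel
    # LRe: plain least squares over the window, residual at pixel k
    A = np.stack([I, np.ones(n)], axis=1)
    coef, *_ = np.linalg.lstsq(A, p, rcond=None)
    lre = (coef[0] * I[k] + coef[1] - p[k]) ** 2
    # ALRe: least squares with the equality constraint p_k = a I_k + b
    dI, dp = I - I[k], p - p[k]
    a_hat = np.sum(dp * dI) / (np.sum(dI * dI) + 1e-12)
    alre_res = np.mean((a_hat * dI - dp) ** 2)
    return lre, alre_res

print(one_trial())   # the anchored residual stays clearly non-zero at the bad pixel
```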

5. Applications

We provide four examples of using ALRe, including haze removal, depth estimation, feathering and edge-preserving smoothing. ALRe affects these applications by providing or replacing the data weights in their post-refinements.

5.1. Transmission Refinement for Haze Removal

In the field of haze removal, hazy images are considered as haze-free images attenuated by atmospheric light and transmission maps, where the transmission map represents the attenuation ratios. With evenly dispersed haze, attenuation ratios are related to scene depths. Therefore, transmission edges should be consistent with depth edges. Since depth edges are unavailable and mostly coincide with color edges, transmission maps are expected to have edge consistency with the hazy images.
However, limited by existing technologies, transmission maps usually have unsatisfactory edge consistency; thus, a post-refinement based on the hazy image is popular. Two examples based on the dark channel prior [27] and the haze-line model [24] are illustrated in Figure 2. The initial transmission map based on the dark channel prior suffers from the block effect, which indicates over-estimated transmissions in the vicinity of large depth jumps [27]. As displayed in Figure 2a, the edges of the two images are misaligned near the skyline, where the large transmission values of the distant buildings are dilated. The initial transmission map based on the haze-line model is estimated under the assumption that the end of each haze-line is haze-free [24]. The assumption is unreliable for short haze-lines, introducing isolated outliers, such as the over-estimated pixels on the sky and roof.
To address the block effect, Zhu et al. [28] detect outliers based on an improved local constant assumption customized for the dark channel prior and then employ WLS. As shown in Figure 2c, with correctly revealed outliers, the block effect is well removed without introducing halo effects. Since the initial transmission map based on the haze-line model is unreliable on short haze-lines, Berman et al. [24] detect outliers based on the effective length of each haze-line and then employ WLS. As shown in Figure 2d, outliers are correctly revealed, and the transmission map is well refined.
Despite their effectiveness, these outlier detection methods are based on a deep understanding of the assumption, prior and model applied in the estimation process. As a comparison, ALRe achieves similar performance without any of this knowledge. As shown in Figure 2e, outliers are also revealed purely according to the law of edge consistency. The refined results based on WLS differ only trivially from those of Zhu et al. [28] and Berman et al. [24].

5.2. Refinements in Depth Estimation

Disparity refers to the difference in image locations of a point seen from different views. Disparity maps are inversely proportional to depth maps, whose edges are consistent with color edges. Therefore, they are also expected to have edge consistency with the color images. Disparities can be estimated by stereo matching. However, the estimates might be false or invalid at some pixels due to occlusions. An example from the Middlebury 2005 dataset [29,30] is shown in Figure 3a (one image of a pair). The initial disparity map in Figure 3d is generated by Hosni et al. [25] (without refinement). Outliers can be seen on the left side of most dolls.
Hosni et al. [25] trace outliers by cross check, which requires the image of the other view. ALRe is applicable without this extra information. The binary results of the two methods are shown in Figure 3b,c, and the refined maps are displayed in Figure 3e,f. As can be seen, although the seams are missed, ALRe reveals most occluded regions. The refined results are similar and comparable even though ALRe has much less information.
Cross check is not always available because it requires at least two views of the same object; it is not applicable to single-view equipment such as an RGB-D camera. The depth map of an RGB-D camera usually contains unstable edges due to the shifted position and low resolution of the depth camera, as illustrated in Figure 4a,b. This problem can be solved by WMF. As shown in Figure 4d, the winding edges are well regularized, but the values of the pointed regions are wrongly picked. These regions have zero values because they are invisible to the depth camera and thus should not be considered in the picking. With the help of ALRe, these regions carry little weight in WMF, and a more convincing result is achieved.

5.3. Feathering

Image feathering produces an alpha matte of a complex object based on a rough binary mask, which can be obtained manually or from other segmentation methods. GF is an efficient tool but not error-tolerant enough, since it treats all pixels equally. Masks with obvious errors might lead to halo effects. As shown in Figure 5d, the result of GF inherits the errors from Figure 5b, leading to the over-estimated and under-estimated results marked by A and B, respectively.
As displayed in Figure 5c, the weights of the mask are very low near the boundaries. This is not surprising, since rough masks are never expected to have edges consistent with the color image. However, with a closer look, phantoms of the edges from both images can be observed, and the regions between them have almost zero fidelity. With this information, a more convincing matte is produced, as shown in Figure 5e.

5.4. Edge-Preserving Smoothing

Edge-preserving smoothing aims to erase weak edges but preserve strong ones. As an edge-preserving filter, GF suffers from halo effects. Various methods have been proposed for this problem and various weights have been investigated, as described in Equation (3).
ALRe also has the potential to discover strong edges. Firstly, we smooth the sharp input with a low-pass filter, as shown in Figure 6b. After this indiscriminate suppression, weak edges are almost erased but strong edges are still observable. Then, we investigate the edge consistency between the sharp input and the smoothed result. Stronger edges contribute larger residuals, as described by Equation (16) and confirmed in Figure 6c, and thus can be recognized and enhanced back.
Therefore, with the help of ALRe, edge-preserving smoothing can be achieved by enhancing the smoothed result guided by the sharp input. In this process, GF copies strong edges back in low-weight regions. As shown in Figure 6e, the lawn is as smooth as the one in Figure 6d, but the halo effects near the skyline are avoided.

5.5. Summary

Different from noises and biases, outliers stem from theoretical or system flaws. They are usually far away from the true results and severely violate the law of edge consistency. As shown in the examples above, outliers introduce different estimates on the same object, large value jumps between similar colors, or unexpected edges in flat regions. Most of them can be found without any knowledge beyond the expectation of edge consistency. Compared to prior-based, model-based and hardware-based outlier detection methods, ALRe is unlikely to be the best. However, as a general method, ALRe can be used when a specific method is absent, which is a common situation. Furthermore, it can be used as an extra add-on of existing methods.

6. Experiments

In this section, we conduct two kinds of quantitative experiments. Firstly, the value of ALRe as an add-on of transmission refinement, disparity refinement and feathering is investigated. Secondly, its outlier detection accuracy is measured and compared with WMF (the kernel of guided filter is adopted) based on synthetic images.
In the first part, the D-HAZY [31], Middlebury [32] and Alpha Matting [33] datasets are employed (they are public datasets; the permissions can be found on the dataset websites. D-HAZY: http://ancuti.meo.etc.upt.ro/D_Hazzy_ICIP2016/ (accessed on 5 April 2021); Middlebury: https://vision.middlebury.edu/stereo/data/ (accessed on 5 April 2021); Alpha Matting: http://www.alphamatting.com/index.html (accessed on 5 April 2021)). Comparisons are made between rough outputs, refined outputs without ALRe and refined outputs with ALRe. The index used to measure edge consistency is the ESR (mean edge strength ratio):
$$\mathrm{ESR}(q, I) = \frac{1}{N} \sum_{k \in q} \frac{|G(q_k)| + \epsilon}{|G(I_k)| + \epsilon}. \tag{17}$$
ESR exactly follows the concept of edge consistency but lacks a reasonable minimum; for example, results with constant values always have the minimal ESR. Therefore, to make sure that what ALRe introduces are improvements rather than over-smoothing, additional metrics measuring output quality are required. In both parts, ϵ, LB and UB are 0.001, 0.01 and 0.3, respectively. The size of Ω depends on the application.
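A minimal sketch of the ESR metric of Equation (17) for grayscale inputs is given below; the gradient operator |G(·)| is not specified in the text, so the Sobel magnitude used here is an assumption.

```python
import numpy as np
from scipy.ndimage import sobel

def esr(q, I, eps=1e-3):
    gq = np.hypot(sobel(q, axis=0), sobel(q, axis=1))   # |G(q)|
    gI = np.hypot(sobel(I, axis=0), sobel(I, axis=1))   # |G(I)|
    return np.mean((gq + eps) / (gI + eps))
```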

6.1. Transmission Refinement

D-HAZY [31] is a popular dataset which provides abundant pairs of hazy and haze-free images. It is built from Middlebury [32] and NYU Depth [34]. In this comparison, we only use the Middlebury part because of its significantly higher quality. The MSE (mean squared error) between the haze removal result and the groundtruth is used for quality measurement.
In practice, we found that global biases dominate the results of Berman et al. [24], while outliers are trivial. As a comparison, the block-effect outliers in the rough outputs of Zhu et al. [28] challenge both the post-refinement and ALRe. Therefore, the experiment is based on Zhu et al. [28]. We exactly follow the configurations of Zhu et al. [28], and the Ω for ALRe has the same size (51×51). For each sample, we produce the rough transmission map with block effect, the refined map based on WLS without ALRe and the refined map with ALRe. Transmission maps are evaluated by ESR, and haze removal results are evaluated by MSE.
The results are displayed in Figure 7, where the result of Zhu et al. [28] is marked as WLS+WCA (weighted constant assumption). As can be observed, naive WLS improves edge consistency but barely improves haze removal quality. With the help of ALRe, both improvements are significant and even comparable to Zhu et al. [28].

6.2. Disparity Refinement

Middlebury [32] is a widely used dataset with stereo images and groundtruth disparities. The ER (error ratio) of the estimated disparity map is used for quality measurement. It is the percentage of bad pixels with an absolute error of no less than 1 (pixels with unknown disparity are excluded from the statistics). As pointed out by Ma et al. [11], textureless regions dominate the statistics while outliers are trivial. Therefore, we only use the 2001 dataset, where the ERs of the initial results are all less than 5.
The size of WMF is 11×11, following the configuration of Ma et al. [11]. A larger WMF could erase more outliers but would also smear image content. However, finding outliers with a large ALRe before any modification is safe. Therefore, we set the size of ALRe to 21×21. For each sample, we produce the rough disparity map based on the cost-volume method [11] (boxfilter for cost aggregation), the refined map based on WMF without ALRe and the refined map with ALRe. Disparity maps are evaluated by ESR and ER.
The results for each sample are displayed in Figure 8. The y-axis is ln(ESR). As can be seen, WMF introduces significant boosts in both edge consistency and accuracy, and ALRe is able to introduce further improvements. An example of this small but stable improvement is displayed in Figure 9. As can be observed in Figure 9c,d, WMF with ALRe produces fewer spikes.

6.3. Feathering

Alpha Matting [33] is a dataset providing abundant images and groundtruth alpha mattes. The MSE between the estimated matte and the groundtruth is used for quality measurement. As shown in Figure 11d, the rough mask of the Alpha Matting dataset [33] is a trimap, consisting of a foreground region, a background region and unknown pixels. To produce rough binary masks, we fill each unknown pixel with the label of its closest region. To prevent over-large errors, samples containing unknown pixels far from both regions (Euclidean distance of more than 25) are excluded. Finally, the five samples in Figure 10 are selected.
The size of GF is 35×35. A larger GF could cover more reliable labels but worsens the halo-effect and texture-copy problems. However, finding outliers with a large ALRe is safe. Therefore, we use a 51×51 ALRe, which can stride over the largest unknown region. For each sample, we produce the rough binary mask, the refined matte based on GF without ALRe and the refined matte with ALRe.
The results for each sample are displayed in Figure 10, where GF introduces improvements in both aspects and ALRe helps GF to introduce more. An example is illustrated in Figure 11. The result of GF shown in Figure 11b has obvious halo-effect and texture-copy problems due to its large size. However, the large size is necessary because of the large unknown region. If we dilate the foreground region, the problems become less obvious but still exist, as shown in Figure 11c. As shown in Figure 11e,f, with the help of ALRe, the problems are solved, and the refined mattes are barely affected by the dilation.

6.4. Outlier Detection

The IoU (intersection over union) of outlier detection result and groundtruth is used for evaluation.
$$\mathrm{IoU}(\mathrm{GT}, \mathrm{MASK}) = \frac{|\mathrm{GT} \cap \mathrm{MASK}|}{|\mathrm{GT} \cup \mathrm{MASK}|}. \tag{18}$$
MASK and GT are binary images, whose pixel x equals 1 if p ( x ) is asserted as an outlier. The MASK of ALRe is
$$\mathrm{MASK}_{\mathrm{ALRe}}(p, I)_k = \begin{cases} 1, & \mathrm{ALRe}(p, I)_k < 0.05 \\ 0, & \mathrm{ALRe}(p, I)_k \ge 0.05 \end{cases} \tag{19}$$
The MASK of WMF is produced based on the intuition that pixels greatly changed after filtering are outliers; thus,
$$\mathrm{MASK}_{\mathrm{WMF}}(p, I)_k = \begin{cases} 1, & |\mathrm{WMF}(p, I) - p|_k > 0.3 \\ 0, & |\mathrm{WMF}(p, I) - p|_k \le 0.3 \end{cases} \tag{20}$$
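A minimal sketch of the evaluation masks and IoU defined above is given below, reusing the alre() and weighted_median() sketches from earlier sections; the thresholds 0.05 and 0.3 follow the text.

```python
import numpy as np

def iou(gt, mask):
    inter = np.logical_and(gt, mask).sum()
    union = np.logical_or(gt, mask).sum()
    return inter / max(union, 1)

def mask_alre(p, I):
    return alre(I, p) < 0.05                              # low fidelity -> asserted outlier

def mask_wmf(p, I):
    return np.abs(weighted_median(p, I) - p) > 0.3        # large change -> asserted outlier
```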
Source images I are provided by Middlebury 2014 [35]. The groundtruth estimate q is produced from a parameter map (a, b) as q_k = a_k^T I_k + b_k. The map (a, b) is generated by smoothing a fully random matrix with an R×R boxfilter. The contaminated estimate p is synthesized by adding outliers to q. Two kinds of outliers are tested: spikes and bad regions. The number of outliers is controlled by M.
The task is more challenging with smaller R and larger M. We tested 22 values of R and 3 values of M. The size of Ω for ALRe and WMF is 25×25 (the inputs are 640×480). A larger R leads to a larger mean(ALRe(q, I)), which indicates the fidelity of the local linear assumption. Since this assumption is the cornerstone of ALRe, we term this value the ALRe expectation. A sketch of the data synthesis is given below.
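The following is a minimal sketch of this synthesis protocol for a grayscale source: spike outliers replace single pixels and region outliers replace square patches. The patch sizes and bias range are illustrative assumptions, not the paper's exact settings.

```python
import numpy as np
from scipy.ndimage import uniform_filter

def synthesize(I, R=31, M=50, kind="region", seed=2):
    rng = np.random.default_rng(seed)
    smooth = lambda x: uniform_filter(x, size=R, mode='reflect')
    a, b = smooth(rng.random(I.shape)), smooth(rng.random(I.shape))
    q = a * I + b                              # groundtruth estimate, edge-consistent with I
    p, gt = q.copy(), np.zeros(I.shape, dtype=bool)
    h, w = I.shape
    for _ in range(M):
        y, x = rng.integers(h), rng.integers(w)
        bias = rng.uniform(0.3, 0.6) * rng.choice([-1.0, 1.0])
        if kind == "spike":
            p[y, x] += bias
            gt[y, x] = True
        else:                                  # square bad region
            s = int(rng.integers(10, 30))
            ys, xs = slice(y, min(y + s, h)), slice(x, min(x + s, w))
            p[ys, xs] += bias
            gt[ys, xs] = True
    return q, p, gt
```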
The results are illustrated in Figure 12. As can be seen, both ALRe and WMF achieve high accuracy at spike detection. ALRe fails when the ALRe expectation is lower than 0.2, and the fewer the outliers, the worse the IoU. This means that ALRe has a true negative problem when the local linear assumption is not established. However, ALRe performs significantly better than WMF at bad region detection. The mean IoUs of ALRe corresponding to the increasing M are 0.978, 0.958 and 0.868, while those of WMF are 0.814, 0.727 and 0.556; the gap is approximately 0.2. ALRe is robust in this task even when the ALRe expectation is very low. For example, detecting outliers from Figure 13b (the ALRe expectation is less than 0.2) is very challenging. WMF only achieves an IoU of 0.6508 because many outliers happen to be the medians, whereas ALRe retains a high IoU of 0.9027. The undesired hollows in MASK_WMF are mostly avoided in MASK_ALRe. Furthermore, the effects of source contrast and outlier strength are tested by shrinking the intensities of p and I toward their mean values. As shown in Figure 12, ALRe maintains high IoUs until the contrast is lower than 70% or the outlier strength is lower than 60%.

7. Conclusions

In this paper, we address the problem of general outlier detection. Edge consistency is exploited, based on which a hypothesize-and-verify algorithm termed ALRe is proposed. Analysis and comparisons with other options show that ALRe has good features, including asymmetry, no false positive problem and linear complexity. Examples on four applications show that ALRe provides meaningful detection results and can even replace specifically designed methods without obvious quality degradation. Experiments on three datasets demonstrate that ALRe helps post-refinement algorithms to further improve contaminated estimates. Experiments on synthetic images show that ALRe is as effective as the mainstream WMF at spike detection and is significantly better at bad region detection; the improvements measured by IoU are about 0.2.
In conclusion, ALRe is a feasible solution to the general outlier detection problem. It can be used when the estimation process is unknown, when a specific method is absent, or simply as an extra add-on of post-refinement algorithms.

Author Contributions

Funding acquisition, J.Y.; Methodology, M.Z.; Software, Y.H.; Validation, J.L.; Visualization, B.H. All authors have read and agreed to the published version of the manuscript.

Funding

This work was supported by the National Key Research and Development Program of China under Grant 2020YFB1312800, and the National Natural Science Foundation of China under Grant 62003007.

Institutional Review Board Statement

Not applicable.

Informed Consent Statement

Not applicable.

Data Availability Statement

Publicly available datasets were analyzed in this study in Section 6.

Conflicts of Interest

The authors declare no conflict of interest.

References

  1. Ouahabi, A. A review of wavelet denoising in medical imaging. In Proceedings of the 8th International Workshop on Systems, Signal Processing and their Applications (WoSSPA), Algiers, Algeria, 12–15 May 2013; pp. 19–26. [Google Scholar]
  2. Farbman, Z.; Fattal, R.; Lischinski, D.; Szeliski, R. Edge-preserving decompositions for multi-scale tone and detail manipulation. ACM Trans. Graphic. 2008, 27, 1–10. [Google Scholar] [CrossRef]
  3. Levin, A.; Lischinski, D.; Weiss, Y. A closed-form solution to natural image matting. IEEE Trans. Pattern Anal. Mach. Intell. 2008, 30, 228–242. [Google Scholar] [CrossRef] [Green Version]
  4. Petschnigg, G.; Szeliski, R.; Agrawala, M.; Cohen, M.; Hoppe, H.; Toyama, K. Digital photography with flash and no-flash image pairs. ACM Trans. Graphic. 2004, 23, 664–672. [Google Scholar] [CrossRef]
  5. He, K.; Sun, J.; Tang, X. Guided image filtering. IEEE Trans. Pattern Anal. Mach. Intell. 2013, 35, 1397–1409. [Google Scholar] [CrossRef] [PubMed]
  6. Chakhchoukh, Y.; Lei, H.; Johnson, B.K. Diagnosis of outliers and cyber attacks in dynamic PMU-based power state estimation. IEEE Trans. Power Syst. 2019, 35, 1188–1197. [Google Scholar] [CrossRef]
  7. Angiulli, F.; Basta, S.; Lodi, S.; Sartori, C. GPU strategies for distance-based outlier detection. IEEE Trans. Parallel Distrib. Syst. 2016, 27, 3256–3268. [Google Scholar] [CrossRef]
  8. Angiulli, F.; Basta, S.; Pizzuti, C. Distance-based detection and prediction of outliers. IEEE Trans. Knowl. Data Eng. 2005, 18, 145–160. [Google Scholar] [CrossRef]
  9. Takeuchi, J.I.; Yamanishi, K. A unifying framework for detecting outliers and change points from time series. IEEE Trans. Knowl. Data Eng. 2006, 18, 482–492. [Google Scholar] [CrossRef]
  10. Ouahabi, A.; Taleb-Ahmed, A. Deep learning for real-time semantic segmentation: Application in ultrasound imaging. Pattern Recognit. Lett. 2021, 144, 27–34. [Google Scholar] [CrossRef]
  11. Ma, Z.; He, K.; Wei, Y.; Sun, J.; Wu, E. Constant time weighted median filtering for stereo matching and beyond. In Proceedings of the 2013 IEEE International Conference on Computer Vision, Sydney, Australia, 1–8 December 2013; pp. 49–56. [Google Scholar]
  12. Aranda, L.A.; Reviriego, P.; Maestro, J.A. Error detection technique for a median filter. IEEE Trans. Nucl. Sci. 2017, 64, 2219–2226. [Google Scholar] [CrossRef]
  13. Green, O. Efficient scalable median filtering using histogram-based operations. IEEE Trans. Image Process. 2017, 27, 2217–2228. [Google Scholar] [CrossRef]
  14. Zhu, M.; Gao, Z.; Yu, J.; He, B.; Liu, J. ALRe: Outlier Detection for Guided Refinement. In Proceedings of the European Conference on Computer Vision, Glasgow, UK, 23–28 August 2020; pp. 788–802. [Google Scholar]
  15. Liu, W.; Chen, X.; Shen, C.; Liu, Z.; Yang, J. Semi-global weighted least squares in image filtering. In Proceedings of the 2017 IEEE International Conference on Computer Vision, Venice, Italy, 22–29 October 2017; pp. 5861–5869. [Google Scholar]
  16. Dong, L.; Zhou, J.; Tang, Y.Y. Effective and fast estimation for image sensor noise via constrained weighted least squares. IEEE Trans. Image Process. 2018, 27, 2715–2730. [Google Scholar] [CrossRef] [PubMed]
  17. Caraffa, L.; Tarel, J.P.; Charbonnier, P. The guided bilateral filter: When the joint/cross bilateral filter becomes robust. IEEE Trans. Image Process. 2015, 24, 1199–1208. [Google Scholar] [CrossRef] [PubMed] [Green Version]
  18. Chaudhury, K.N.; Rithwik, K. Image denoising using optimally weighted bilateral filters: A sure and fast approach. In Proceedings of the 2015 IEEE International Conference on Image Processing, Quebec City, QC, Canada, 27–30 September 2015; pp. 108–112. [Google Scholar]
  19. Ochotorena, C.N.; Yamashita, Y. Anisotropic Guided Filtering. IEEE Trans. Image Process. 2019, 29, 1397–1412. [Google Scholar] [CrossRef]
  20. Li, Z.; Zheng, J.; Zhu, Z.; Yao, W.; Wu, S. Weighted guided image filtering. IEEE Trans. Image Process. 2014, 24, 120–129. [Google Scholar]
  21. Dai, L.; Yuan, M.; Zhang, F.; Zhang, X. Fully connected guided image filtering. In Proceedings of the 2015 IEEE International Conference on Computer Vision, Santiago, Chile, 7–13 December 2015; pp. 352–360. [Google Scholar]
  22. Guo, X.; Li, Y.; Ma, J.; Ling, H. Mutually guided image filtering. IEEE Trans. Pattern Anal. Mach. Intell. 2018, 42, 1283–1290. [Google Scholar]
  23. Fattal, R. Dehazing using color-lines. ACM Trans. Graphic. 2014, 34, 1–14. [Google Scholar] [CrossRef]
  24. Berman, D.; Treibitz, T.; Avidan, S. Non-local image dehazing. In Proceedings of the 2016 IEEE International Conference on Computer Vision and Pattern Recognition, Las Vegas, NV, USA, 27–30 June 2016; pp. 1674–1682. [Google Scholar]
  25. Hosni, A.; Rhemann, C.; Bleyer, M.; Rother, C.; Gelautz, M. Fast cost-volume filtering for visual correspondence and beyond. IEEE Trans. Pattern Anal. Mach. Intell. 2012, 35, 504–511. [Google Scholar] [CrossRef]
  26. Fischler, M.A.; Bolles, R.C. Random sample consensus: A paradigm for model fitting with applications to image analysis and automated cartography. Commun. ACM 1981, 24, 381–395. [Google Scholar] [CrossRef]
  27. He, K.; Sun, J.; Tang, X. Single image haze removal using dark channel prior. IEEE Trans. Pattern Anal. Mach. Intell. 2011, 33, 2341–2353. [Google Scholar]
  28. Zhu, M.; He, B.; Liu, J.; Yu, J. Boosting Dark Channel Dehazing via Weighted Local Constant Assumption. Signal Process. 2020, 171, 107453. [Google Scholar] [CrossRef]
  29. Scharstein, D.; Pal, C. Learning conditional random fields for stereo. In Proceedings of the 2007 IEEE Conference on Computer Vision and Pattern Recognition, Minneapolis, MN, USA, 17–22 June 2007. [Google Scholar] [CrossRef]
  30. Hirschmuller, H.; Scharstein, D. Evaluation of cost functions for stereo matching. In Proceedings of the 2007 IEEE Conference on Computer Vision and Pattern Recognition, Minneapolis, MN, USA, 17–22 June 2007; pp. 1–8. [Google Scholar] [CrossRef]
  31. Ancuti, C.; Ancuti, C.O.; De Vleeschouwer, C. D-hazy: A dataset to evaluate quantitatively dehazing algorithms. In Proceedings of the 2016 IEEE International Conference on Image Processing (ICIP), Phoenix, AZ, USA, 25–28 September 2016; pp. 2226–2230. [Google Scholar]
  32. Scharstein, D.; Szeliski, R. A taxonomy and evaluation of dense two-frame stereo correspondence algorithms. Int. J. Comput. Vision 2002, 47, 7–42. [Google Scholar] [CrossRef]
  33. Rhemann, C.; Rother, C.; Wang, J.; Gelautz, M.; Kohli, P.; Rott, P. A perceptually motivated online benchmark for image matting. In Proceedings of the 2009 IEEE Conference on Computer Vision and Pattern Recognition, Miami, FL, USA, 20–25 June 2009; pp. 1826–1833. [Google Scholar]
  34. Silberman, N.; Hoiem, D.; Kohli, P.; Fergus, R. Indoor segmentation and support inference from rgbd images. In Proceedings of the European Conference on Computer Vision, Florence, Italy, 7–13 October 2012; pp. 746–760. [Google Scholar]
  35. Scharstein, D.; Hirschmüller, H.; Kitajima, Y.; Krathwohl, G.; Nešić, N.; Wang, X.; Westling, P. High-resolution stereo datasets with subpixel-accurate ground truth. In Proceedings of the German Conference on Pattern Recognition, Munster, Germany, 2–5 September 2014; pp. 31–42. [Google Scholar]
Figure 1. Bad pixel residuals of linear regression with and without equality constraint. Marked as ALRe and LRe, respectively. (a–d) Mean and minimal residuals of bad pixels with different biases; (e) Mean residuals of bad pixels under different configurations of bad pixel ratio; (f) Under different bad pixel ratios, the thresholds that pixels with smaller biases might have zero residuals.
Figure 2. Transmission map refinements. Transmission and weight maps are displayed in color. The warmer the color, the larger the value. (a,b) Hazy images and the initial transmission maps based on dark channel prior [27] and haze-line model [24] respectively; (c,d) The weight maps and the refined results based on Zhu et al. [28] and Berman et al. [24] respectively; (e) The weight maps based on ALRe and corresponding results.
Figure 3. Disparity map refinements. Disparity and weight maps are displayed in color. The warmer the color, the larger the value. (a) One of the two stereo images; (b) the weight map based on cross check; (c) the weight map based on ALRe (the segmentation threshold is 0.2); (d) initial disparity map based on Hosni et al. [25]; (e) the refined map based on (b); (f) the refined map based on (c).
Figure 4. Depth map refinements. Depth and weight maps are displayed in color. The warmer the color, the larger the value. (a) Color image; (b) rough depth map; (c) the weight map based on ALRe; (d) the refined result based on WMF without ALRe; (e) corresponding result with ALRe.
Figure 5. Feathering. Weight map is displayed in color. The warmer the color, the larger the value. (a) Input image; (b) rough mask; (c) the weight map based on ALRe; (d) the refined result based on GF without ALRe; (e) corresponding result with ALRe.
Figure 6. Edge preserving filtering. Weight map is displayed in color. The warmer the color, the larger the value. (a) Input image; (b) the smoothed input image based on Gaussian low-pass filter; (c) the weight map of (b) based on ALRe; (d) the smoothed result of (a) guided by itself based on GF; (e) the enhanced result of (b) guided by (a) based on GF with ALRe.
Figure 7. Effect of ALRe on WLS in transmission refinement. The ends of the orange line are the mean errors of rough and refined outputs of Zhu et al. [28]. The ends of the blue lines are the errors of WLS with and without ALRe.
Figure 8. Effect of ALRe on WMF in disparity post-refinement. The errors of cost-volume method [25] are marked by the dots with sample names. Each first linked dot is the error of WMF, and the next dot is the error of WMF with ALRe.
Figure 9. Disparity post-refinement using WMF with and without ALRe. Disparity and weight maps are displayed in color. The warmer the color, the larger the value. (a) One of the two stereo images; (b) the rough output of cost-volume [25]; (c) the refined result of (b) guided by (a) based on WMF without ALRe; (d) corresponding result with ALRe.
Figure 10. Effect of ALRe on matting. The errors of initial masks are marked by sample names. Each first linked dot is the error of GF, and the next is the error of GF with ALRe.
Figure 11. Feathering. (a) Color image; (b) the result of GF; (c) the result of GF with dilated foreground region; (d) the trimap; (e,f) corresponding results with ALRe.
Figure 12. Outlier detection accuracy. Left, spike detection; Middle, bad region detection; Right, the effects of source contrast and outlier strength.
Figure 13. Comparison of ALRe and WMF on bad region detection. (a) Source image; (b) contaminated estimate; (c) detection result of WMF, IoU = 0.6508; (d) detection result of ALRe, IoU = 0.9027; (e) ground truth.
Publisher’s Note: MDPI stays neutral with regard to jurisdictional claims in published maps and institutional affiliations.
