Article

Edge-Preserving Denoising of Image Sequences

Department of Biostatistics, University of Florida, Gainesville, FL 32603, USA
*
Author to whom correspondence should be addressed.
Entropy 2021, 23(10), 1332; https://doi.org/10.3390/e23101332
Submission received: 2 September 2021 / Revised: 7 October 2021 / Accepted: 7 October 2021 / Published: 12 October 2021

Abstract

To monitor the Earth’s surface, the satellites of NASA’s Landsat program provide image sequences of any region on the Earth continuously over time. These image sequences are a unique resource for studying the Earth’s surface, changes in the Earth's resources over time, and their implications in agriculture, geology, forestry, and more. Besides the natural sciences, image sequences are also commonly used in functional magnetic resonance imaging (fMRI) in medical studies for understanding the functioning of the brain and other organs. In practice, observed images almost always contain noise and other contamination. For reliable subsequent image analysis, it is important to remove such contamination in advance. This paper focuses on image sequence denoising, which has not yet been well discussed in the literature. To this end, an edge-preserving image denoising procedure is suggested. The suggested method is based on a jump-preserving local smoothing procedure, in which the bandwidths are chosen such that possible spatio-temporal correlations in the observed image intensities are accommodated properly. Both theoretical arguments and numerical studies show that this method works well in the various cases considered.

1. Introduction

The Landsat project, led by the US Geological Survey (USGS) and NASA, has launched eight satellites since 1972 to continuously provide scientifically valuable images of the Earth’s surface. These images can be freely accessed by researchers around the world (cf., Zanter [1]). This rich archive of Landsat images has become a major resource for scientific research about the Earth’s surface and its resources in different scientific disciplines, including forest science, climate science, agriculture, ecology, fire science, and many more. As an example, Figure 1 shows two images of the Las Vegas area in Nevada taken in 1984 and 2007, respectively. These two images clearly show the increasing urban sprawl of Las Vegas during the 23-year period, and consequently, the environment in that region has changed dramatically. The current satellite (i.e., Landsat 8) can deliver an image of a given region roughly every 16 days. So, we have a sequence of images of that region collected sequentially over time, stored in the Landsat database, which grows continually. Image sequences are commonly used in many other applications as well, including functional magnetic resonance imaging (fMRI) in neuroscience and quality control in manufacturing industries (Qiu [2]). In practice, observed images usually contain noise and other contamination (Gonzalez and Woods [3]). For reliable subsequent image analyses, such contamination should be removed in advance. In the image processing literature, the removal of noise from an observed image is referred to as image denoising. This paper focuses on image denoising for analyzing observed image sequences.
In the literature, there has been extensive discussion on image denoising (Qiu [4]). Many early methods in the computer science literature are based on the Markov random field (MRF) framework, in which the observed image intensities are assumed to have the Markov property that the observed intensity at a given pixel depends only on the observed intensities in a neighborhood of that pixel (Geman and Geman [5]). Then, if the true image is assumed to have a prior distribution which is also an MRF, its posterior distribution would be an MRF too, and consequently, the true image can be estimated by the maximum a posteriori (MAP) estimator (e.g., Geman and Geman [5], Besag [6], Fessler et al. [7]). Other popular image denoising methods include those based on diffusion equations (e.g., Perona and Malik [8], Weickert [9]), total variation (Beck and Teboulle [10], Rudin et al. [11], Yuan et al. [12]), wavelet transformations (e.g., Chang et al. [13], Mrázek [14]), jump regression analysis (e.g., Gijbels et al. [15], Qiu [16], Qiu [17], Qiu and Mukherjee [18]), adaptive weights smoothing (e.g., Polzehl and Spokoiny [19]), spatial adaptation (e.g., Kervrann and Boulanger [20]), and more. Besides noise removal, edge preservation is important for image denoising because edges are important structures of images. Some of the methods mentioned above can preserve edges well, such as the ones based on jump regression analysis, total variation, and wavelet transformations. Thorough surveys of popular edge-preserving image denoising methods can be found in Jain and Tyagi [21] and Qiu [4].
Although there are already some existing methods for edge-preserving image denoising, almost all of them handle observed images taken at a single time point. So far, we have not found much discussion about denoising image sequences, which is the focus of the current paper. A given image sequence often describes a gradual change in appearance over time, driven by the underlying process. For instance, the sequence of images of the Las Vegas area acquired by the Landsat satellite (cf., Figure 1) describes the gradual change of the Earth’s surface in that area over time. As mentioned above, two consecutive images in the sequence acquired by the current Landsat satellite are only about 16 days apart, so their difference should be very small. However, the images could be substantially different after a long period of time, as shown in Figure 1. In such applications, it should be reasonable to assume that edge locations in different images either do not change or change gradually over time. To handle such image sequences, the neighboring images should be useful when denoising the image at a given time point; that is, information in neighboring images should be shared during image denoising. Motivated by these features of image sequences, we propose an edge-preserving image denoising procedure for analyzing image sequences in this paper. Our proposed method is based on jump regression analysis (JRA), which is used for regression modeling when the underlying regression function has jumps or other singularities (Qiu [22]). It is a local smoothing procedure, and the possible spatio-temporal correlation in the observed image data is accommodated properly in its construction. Both theoretical arguments and numerical studies show that this method works well in various cases.
The remaining parts of the article are organized as follows. The proposed method is described in detail in Section 2. Its statistical properties and the numerical studies about its performance in different finite-sample cases are presented in Section 3. Several concluding remarks are provided in Section 4. Some technical details are given in Appendix A.

2. Materials and Methods

This section describes our proposed method in two parts. A JRA model for describing an image sequence and the model estimation are discussed in Section 2.1. Selection of several parameters used in model estimation is discussed in Section 2.2.

2.1. JRA Model and Its Estimation

To describe an image sequence, let us consider the following JRA model:
$$Z_{ijk} = f(x_i, y_j; t_k) + \varepsilon_{ijk}, \quad i = 1, 2, \ldots, n_x, \; j = 1, 2, \ldots, n_y, \; k = 1, 2, \ldots, n_t, \tag{1}$$
where $Z_{ijk}$ is the observed image intensity level at the $(i,j)$-th pixel $(x_i, y_j)$ and the $k$-th time point $t_k$, $f(x_i, y_j; t_k)$ is the true image intensity level, and $\varepsilon_{ijk}$ is the pointwise random noise with mean 0 and variance $\sigma^2$. In model (1), spatio-temporal data correlation is allowed; namely, $\{\varepsilon_{ijk}\}$ could be correlated over $i$, $j$, and $k$. For image data, the pixel locations are usually regularly spaced. Without loss of generality, it is assumed that they are equally spaced in the design space $\Omega = [0,1] \times [0,1]$, namely, $(x_i, y_j) = (i/n_x, j/n_y)$, for all $i$ and $j$, where $n_x$ and $n_y$ are the numbers of rows and columns, respectively. The observation times $\{t_k, k = 1, 2, \ldots, n_t\}$ are also assumed to be equally spaced in the time interval $[0,1]$. The true image intensity function $f(x,y;t)$, for $(x,y) \in \Omega$, is continuous in the design space $\Omega$ at each $t \in [0,1]$, except on the edges where it has jumps.
To estimate the unknown image intensity function $f(x,y;t)$ in model (1), we consider a local smoothing method, instead of a global smoothing method (e.g., the smoothing spline method), because of the large amount of data involved in the current problem. Moreover, it has been well discussed in the JRA literature that conventional smoothing methods (e.g., conventional local kernel smoothing methods) would not work well for estimating models like (1), where the true image intensity function $f(x,y;t)$ has jumps at the edges, because the jumps would be blurred by such conventional methods (cf., Qiu [22]). In this paper, we suggest a jump-preserving local smoothing method for estimating (1), described in detail below. For a given point $(x,y;t) \in \Omega \times [0,1]$, define a local neighborhood
$$O(x,y;t) = \left\{ (x', y'; t') \in \Omega \times [0,1] : \frac{(x'-x)^2}{h_x^2} + \frac{(y'-y)^2}{h_y^2} \le 1, \; \frac{|t'-t|}{h_t} \le 1 \right\},$$
where $h_x$, $h_y$, and $h_t$ are the bandwidths in the $x$, $y$, and $t$ directions, respectively. In $O(x,y;t)$, we first consider the following local linear kernel (LLK) smoothing procedure (Fan and Gijbels [23]):
$$\min_{a,b,c,d} \sum_{i=1}^{n_x} \sum_{j=1}^{n_y} \sum_{k=1}^{n_t} \Big\{ Z_{ijk} - \big[ a + b(x_i - x) + c(y_j - y) + d(t_k - t) \big] \Big\}^2 K\!\left( \frac{x_i - x}{h_x}, \frac{y_j - y}{h_y} \right) K\!\left( \frac{t_k - t}{h_t} \right), \tag{2}$$
where $K(v)$ is a density kernel function with support $\{v : |v| \le 1\}$. The solutions to $(a, b, c, d)$ of the minimization problem (2) are denoted as $\hat a(x,y;t)$, $\hat b(x,y;t)$, $\hat c(x,y;t)$, and $\hat d(x,y;t)$, respectively. It can be checked that they have the following expressions:
$$\begin{pmatrix} \hat a(x,y;t) \\ \hat b(x,y;t) \\ \hat c(x,y;t) \\ \hat d(x,y;t) \end{pmatrix} = \begin{pmatrix} m_{000} & m_{100} & m_{010} & m_{001} \\ m_{100} & m_{200} & m_{110} & m_{101} \\ m_{010} & m_{110} & m_{020} & m_{011} \\ m_{001} & m_{101} & m_{011} & m_{002} \end{pmatrix}^{-1} \begin{pmatrix} \sum_{ijk} Z_{ijk} K_{ijk} \\ \sum_{ijk} (x_i - x) Z_{ijk} K_{ijk} \\ \sum_{ijk} (y_j - y) Z_{ijk} K_{ijk} \\ \sum_{ijk} (t_k - t) Z_{ijk} K_{ijk} \end{pmatrix}, \tag{3}$$
where $\sum_{ijk}$ denotes $\sum_{i=1}^{n_x} \sum_{j=1}^{n_y} \sum_{k=1}^{n_t}$, $K_{ijk}$ denotes $K\!\left( \frac{x_i - x}{h_x}, \frac{y_j - y}{h_y} \right) K\!\left( \frac{t_k - t}{h_t} \right)$, and $m_{rsl} = \sum_{ijk} (x_i - x)^r (y_j - y)^s (t_k - t)^l K_{ijk}$, for $r, s, l = 0, 1, 2$. The LLK estimator of $f(x,y;t)$ is defined to be $\hat a(x,y;t)$. The estimated gradient direction of $f(x,y;t)$ at $(x,y;t)$ is $\widehat G(x,y;t) = \big( \hat b(x,y;t), \hat c(x,y;t), \hat d(x,y;t) \big)'$, which indicates the direction in which the plane fitted in $O(x,y;t)$ by the LLK procedure (2) increases fastest. If there is an edge surface in $O(x,y;t)$, then $\widehat G(x,y;t)$ would be (approximately) orthogonal to that surface.
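To make this step concrete, the minimization (2) is an ordinary weighted least squares problem, so the four estimates can be computed by solving the $4 \times 4$ normal equations directly. The following is a minimal Python sketch (not the authors' code); for simplicity it assumes a product of one-dimensional Epanechnikov kernels for the spatio-temporal weights, whereas the paper allows a general bivariate spatial kernel:

```python
import numpy as np

def epanechnikov(v):
    """Epanechnikov density kernel supported on |v| <= 1."""
    return np.where(np.abs(v) <= 1, 0.75 * (1 - v**2), 0.0)

def llk_fit(Z, x, y, t, x0, y0, t0, hx, hy, ht, kernel=epanechnikov):
    """Local linear kernel fit of f at (x0, y0; t0), as in (2)-(3).

    Z is an (nx, ny, nt) array of observed intensities; x, y, t are the
    grid coordinates.  Returns (a, b, c, d): the local estimate of f and
    its three estimated partial derivatives."""
    X, Y, T = np.meshgrid(x - x0, y - y0, t - t0, indexing="ij")
    # Product of 1-D kernels playing the role of the weight K_ijk.
    W = kernel(X / hx) * kernel(Y / hy) * kernel(T / ht)
    mask = W > 0
    D = np.column_stack([np.ones(mask.sum()), X[mask], Y[mask], T[mask]])
    w = W[mask]
    # Weighted least squares: solve (D' W D) beta = D' W Z.
    A = D.T @ (D * w[:, None])
    return np.linalg.solve(A, D.T @ (w * Z[mask]))
```

As a sanity check, when the data are exactly linear in $(x, y, t)$, the local linear fit recovers the intercept and the three slopes exactly.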
In cases when there are no edges in the neighborhood $O(x,y;t)$, $\hat a(x,y;t)$ would be a good estimate of $f(x,y;t)$. Otherwise, it would not be: because $\hat a(x,y;t)$ is a weighted average of all observed image intensities in $O(x,y;t)$, the jumps in the image intensity surface would be smoothed out in the weighted average, and $\hat a(x,y;t)$ would be biased for estimating $f(x,y;t)$. To overcome that limitation, we consider the following one-sided smoothing idea. Let $O(x,y;t)$ be divided into two parts, $O^{(1)}(x,y;t)$ and $O^{(2)}(x,y;t)$, by a plane that passes through $(x,y;t)$ and is perpendicular to $\widehat G(x,y;t)$. See Figure 2 for an example.
Then, in cases when there is an edge surface in $O(x,y;t)$, that plane would be (approximately) parallel to the edge surface. Consequently, at least one of $O^{(1)}(x,y;t)$ and $O^{(2)}(x,y;t)$ would be (mostly) located on a single side of the edge surface in such cases. Now, let us consider the following one-sided LLK smoothing procedure: for $l = 1, 2$,
$$\min_{a,b,c,d} \sum_{(x_i, y_j; t_k) \in O^{(l)}(x,y;t)} \Big\{ Z_{ijk} - \big[ a + b(x_i - x) + c(y_j - y) + d(t_k - t) \big] \Big\}^2 K\!\left( \frac{x_i - x}{h_x}, \frac{y_j - y}{h_y} \right) K\!\left( \frac{t_k - t}{h_t} \right). \tag{4}$$
The solutions of (4) to $(a, b, c, d)$ are denoted as $\big( \hat a^{(l)}(x,y;t), \hat b^{(l)}(x,y;t), \hat c^{(l)}(x,y;t), \hat d^{(l)}(x,y;t) \big)$, for $l = 1, 2$. Intuitively, when there are no edges in $O(x,y;t)$, $\hat a(x,y;t)$, $\hat a^{(1)}(x,y;t)$, and $\hat a^{(2)}(x,y;t)$ are all consistent estimates of $f(x,y;t)$ under some regularity conditions. In such cases, $\hat a(x,y;t)$ would be preferred, since it averages more observations and consequently has a smaller variance. When there are edges in $O(x,y;t)$, $\hat a(x,y;t)$ would not be a good estimate of $f(x,y;t)$, as explained above, but one of $\hat a^{(1)}(x,y;t)$ and $\hat a^{(2)}(x,y;t)$ should estimate $f(x,y;t)$ well. Therefore, in all cases, at least one of the three estimators should estimate $f(x,y;t)$ well.
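The one-sided fits in (4) can be sketched as follows, assuming the neighborhood has already been reduced to the points with positive kernel weight. Deciding which half of the neighborhood each point falls in reduces to the sign of the projection of its centered coordinates onto the estimated gradient (a hedged sketch; variable names are ours):

```python
import numpy as np

def one_sided_fits(coords, Z, w, center, grad):
    """Fit a local plane separately on the two halves of a neighborhood.

    coords : (m, 3) array of (x, y, t) points with positive kernel weight;
    Z      : (m,) observed intensities;  w : (m,) kernel weights;
    center : the point (x, y; t) being estimated;
    grad   : the estimated gradient direction G-hat at `center`.

    The plane through `center` perpendicular to `grad` splits the
    neighborhood into O^(1) (non-negative projection onto grad) and
    O^(2).  Returns the two one-sided coefficient vectors (a, b, c, d)."""
    d = coords - center                  # centered coordinates
    side = d @ grad >= 0                 # which half each point falls in
    fits = []
    for mask in (side, ~side):
        D = np.column_stack([np.ones(mask.sum()), d[mask]])
        wm = w[mask]
        A = D.T @ (D * wm[:, None])      # weighted normal equations
        fits.append(np.linalg.solve(A, D.T @ (wm * Z[mask])))
    return fits
```

When the data contain no edge, both one-sided fits recover the same local plane, consistent with the intuition stated above.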
Next, we need to choose a good estimator from $\hat a(x,y;t)$, $\hat a^{(1)}(x,y;t)$, and $\hat a^{(2)}(x,y;t)$ based on the observed data. This is not straightforward, partly because we do not know in advance whether there are edges in the neighborhood $O(x,y;t)$ and, if there are, whether they are mostly contained in $O^{(1)}(x,y;t)$ or $O^{(2)}(x,y;t)$. To overcome this difficulty, let us consider the following weighted residual mean squares (WRMS) of the local plane fitted by the LLK procedure (2):
$$e(x,y;t) = \frac{\sum_{ijk} \left[ Z_{ijk} - \hat a(x,y;t) - \hat b(x,y;t)(x_i - x) - \hat c(x,y;t)(y_j - y) - \hat d(x,y;t)(t_k - t) \right]^2 K_{ijk}}{\sum_{ijk} K_{ijk}}. \tag{5}$$
The above WRMS measures how well the fitted local plane describes the observed data in $O(x,y;t)$. If there are edges in $O(x,y;t)$, this quantity would be relatively large, due mainly to the jumps in the image intensity surface; otherwise, it would be relatively small. So, the quantity $e(x,y;t)$ contains useful information about the existence of edges in $O(x,y;t)$. Similarly, we can define WRMS values for the two one-sided local planes fitted in $O^{(1)}(x,y;t)$ and $O^{(2)}(x,y;t)$, denoted as $e^{(1)}(x,y;t)$ and $e^{(2)}(x,y;t)$. Based on these WRMS values, we define our edge-preserving estimator of $f(x,y;t)$ to be
$$\begin{aligned} \hat f(x,y;t) = {} & \hat a(x,y;t)\, I\big( D(x,y;t) \le u \big) \\ & + \hat a^{(1)}(x,y;t)\, I\big( D(x,y;t) > u \big)\, I\big( e^{(1)}(x,y;t) < e^{(2)}(x,y;t) \big) \\ & + \hat a^{(2)}(x,y;t)\, I\big( D(x,y;t) > u \big)\, I\big( e^{(1)}(x,y;t) > e^{(2)}(x,y;t) \big) \\ & + \frac{\hat a^{(1)}(x,y;t) + \hat a^{(2)}(x,y;t)}{2}\, I\big( D(x,y;t) > u \big)\, I\big( e^{(1)}(x,y;t) = e^{(2)}(x,y;t) \big), \end{aligned} \tag{6}$$
where $D(x,y;t) = \max\big( e(x,y;t) - e^{(1)}(x,y;t),\; e(x,y;t) - e^{(2)}(x,y;t) \big)$, $I(\cdot)$ is the indicator function, and $u > 0$ is a threshold parameter. By (6), $\hat f(x,y;t)$ is defined to be one of $\hat a(x,y;t)$, $\hat a^{(1)}(x,y;t)$, and $\hat a^{(2)}(x,y;t)$. The quantity $\hat a(x,y;t)$, which is obtained from the entire neighborhood $O(x,y;t)$, is chosen if the observed data indicate no edges in $O(x,y;t)$, as signaled by the event $D(x,y;t) \le u$. Otherwise, the one of the two one-sided quantities $\hat a^{(1)}(x,y;t)$ and $\hat a^{(2)}(x,y;t)$ with the smaller WRMS value is chosen. Although, theoretically, the event $e^{(1)}(x,y;t) = e^{(2)}(x,y;t)$ has probability zero, the last term on the right-hand side of (6) is still included for completeness of the definition of $\hat f(x,y;t)$, and because $e^{(1)}(x,y;t)$ and $e^{(2)}(x,y;t)$ could be treated as equal in certain algorithms when their values are close.
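Once the three estimates and their WRMS values are available at a point, the selection rule (6) is a few comparisons. A minimal sketch (function and variable names are ours, not from the paper):

```python
import numpy as np

def wrms(Z, fitted, w):
    """Weighted residual mean squares of a fitted local plane, as in (5)."""
    return np.sum(w * (Z - fitted) ** 2) / np.sum(w)

def select_estimate(a, a1, a2, e, e1, e2, u):
    """Edge-preserving choice in (6) among the full-neighborhood estimate
    `a` and the one-sided estimates `a1`, `a2`, given their WRMS values
    e, e1, e2 and the threshold u > 0."""
    D = max(e - e1, e - e2)
    if D <= u:            # no evidence of an edge in the neighborhood
        return a
    if e1 < e2:           # edge mostly on the O^(2) side: use the O^(1) fit
        return a1
    if e2 < e1:
        return a2
    return 0.5 * (a1 + a2)   # exact tie: probability zero in theory
```

Running this rule at every design point $(x_i, y_j; t_k)$ yields the denoised image sequence.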

2.2. Parameter Selection

In our proposed method described in Section 2.1, there are four parameters, $h_x$, $h_y$, $h_t$, and $u$, that need to be chosen properly in advance. For that purpose, it is natural to consider the cross-validation (CV) procedure, especially in the current research problem where the observed data are quite large in size. However, it has been well demonstrated in the literature that the conventional CV procedure would not work well in cases when the observed data are autocorrelated, because it cannot effectively distinguish the data correlation structure from the mean structure (cf., Altman [24], Opsomer et al. [25]). In the current problem, spatio-temporal data correlation is possible in almost all applications, so the conventional CV procedure is not appropriate. In the univariate regression setup, Brabanter et al. [26] suggested a modified CV procedure for choosing smoothing parameters in cases with correlated data. This procedure is generalized here for choosing the parameters $h_x$, $h_y$, $h_t$, and $u$ used in the proposed method, as described below. Let the modified CV score for choosing $h_x$, $h_y$, $h_t$, and $u$ be defined as
$$CV(h_x, h_y, h_t, u) = \frac{1}{n_x n_y n_t} \sum_{ijk} \left[ \hat f_{(ijk)}(x_i, y_j; t_k) - Z(x_i, y_j; t_k) \right]^2, \tag{7}$$
where $\hat f_{(ijk)}(x_i, y_j; t_k)$ is the leave-one-out estimate of $f(x_i, y_j; t_k)$ by (2)–(6) after the observation $Z_{ijk}$ is removed from the estimation process and the kernel function is replaced by the so-called $\epsilon$-optimal bimodal kernel function $K_\epsilon(v)$, defined to be
$$K_\epsilon(v) = \frac{4}{4 - 3\epsilon - \epsilon^3} \times \begin{cases} \dfrac{3}{4} (1 - v^2)\, I(|v| \le 1), & \text{if } |v| \ge \epsilon, \\[4pt] \dfrac{3(1 - \epsilon^2)}{4\epsilon} |v|, & \text{if } |v| < \epsilon, \end{cases} \tag{8}$$
where $0 < \epsilon < 1$ is a parameter. Based on a large simulation study, Brabanter et al. [26] suggested choosing $\epsilon$ to be 0.1, which is adopted in this paper. Then, the parameters $h_x$, $h_y$, $h_t$, and $u$ can be chosen by minimizing the modified CV score $CV(h_x, h_y, h_t, u)$ defined in (7) and (8).
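The bimodal kernel (8) can be implemented and sanity-checked numerically: as a density it should integrate to 1 over $[-1, 1]$, and its value at $v = 0$ is zero, which is what prevents an observation from effectively weighting itself in the leave-one-out fit. A sketch:

```python
import numpy as np

def k_epsilon(v, eps=0.1):
    """Epsilon-optimal bimodal kernel of Brabanter et al., as in (8).

    Equals a rescaled Epanechnikov kernel for |v| >= eps, but decreases
    linearly to zero at v = 0, so observations very close to the point
    being fitted receive (almost) no weight in the leave-one-out CV fit."""
    v = np.asarray(v, dtype=float)
    body = np.where(
        np.abs(v) >= eps,
        0.75 * (1.0 - v**2) * (np.abs(v) <= 1),
        0.75 * (1.0 - eps**2) * np.abs(v) / eps,
    )
    return 4.0 / (4.0 - 3.0 * eps - eps**3) * body
```

One can verify that the two branches agree at $|v| = \epsilon$ and that the factor $4/(4 - 3\epsilon - \epsilon^3)$ makes the kernel integrate to 1.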

3. Results

3.1. Statistical Properties

In this part, we discuss some statistical properties of the proposed edge-preserving image sequence denoising method (2)–(6). First, we have the following proposition.
Proposition 1.
Assume that (i) the kernel function $K(v)$ used in (2) is a Lipschitz-1 continuous density function; (ii) the noise terms $\{\varepsilon_{ijk}, i = 1, 2, \ldots, n_x, j = 1, 2, \ldots, n_y, k = 1, 2, \ldots, n_t\}$ in model (1) form a strong mixing stochastic process with the following strong mixing coefficients:
$$\alpha(d) = \sup_{(ijk), (i'j'k')} \sup_{A, B} \left\{ |P(A \cap B) - P(A) P(B)| : A \in \sigma(\varepsilon_{ijk}), B \in \sigma(\varepsilon_{i'j'k'}), \max\{|i - i'|, |j - j'|, |k - k'|\} > d \right\},$$
which have the property that $\alpha(d) \le c_1 \sigma^2 \rho^{c_2 d}$, where $c_1, c_2 > 0$ and $0 < \rho < 1$ are constants; and (iii) $E(\varepsilon_{111}^6) < \infty$. Let $N = n_x n_y n_t$, $H = h_x h_y h_t$, $n_{\min} = \min(n_x, n_y, n_t)$, and $h_{\min} = \min(h_x, h_y, h_t)$. Then, for any $(x,y;t) \in \Omega_h = [h_x, 1-h_x] \times [h_y, 1-h_y] \times [h_t, 1-h_t]$, we have
$$\left| \frac{1}{NH} \sum_{ijk} K\!\left( \frac{x_i - x}{h_x}, \frac{y_j - y}{h_y} \right) K\!\left( \frac{t_k - t}{h_t} \right) - 1 \right| = O\!\left( \frac{1}{n_{\min} h_{\min}} \right),$$
$$E \left| \frac{1}{NH} \sum_{ijk} \varepsilon_{ijk} K\!\left( \frac{x_i - x}{h_x}, \frac{y_j - y}{h_y} \right) K\!\left( \frac{t_k - t}{h_t} \right) \right|^2 = O\!\left( \frac{1}{NH} \right),$$
$$E \left| \frac{1}{NH} \sum_{ijk} (\varepsilon_{ijk}^2 - \sigma^2) K\!\left( \frac{x_i - x}{h_x}, \frac{y_j - y}{h_y} \right) K\!\left( \frac{t_k - t}{h_t} \right) \right|^2 = O\!\left( \frac{1}{NH} \right).$$
Based on the results in Proposition 1, we can derive the following properties of the LLK estimates defined in (3).
Theorem 1.
Besides the conditions in Proposition 1, we further assume that the true image intensity function $f(x,y;t)$ has continuous first-order partial derivatives with respect to $x$, $y$, and $t$ in the design space $\Omega$, except at the edge curves. Then, for any $(x,y;t) \in \Omega_h \setminus J_h$, we have
$$\begin{pmatrix} \hat a(x,y;t) \\ \hat b(x,y;t) \\ \hat c(x,y;t) \\ \hat d(x,y;t) \end{pmatrix} = \begin{pmatrix} f(x,y;t) \\ f_x(x,y;t) \\ f_y(x,y;t) \\ f_t(x,y;t) \end{pmatrix} + \begin{pmatrix} O(h_x^2 + h_y^2 + h_t^2) \\ O\!\left( \frac{h_x^2 + h_y^2 + h_t^2}{h_x} \right) \\ O\!\left( \frac{h_x^2 + h_y^2 + h_t^2}{h_y} \right) \\ O\!\left( \frac{h_x^2 + h_y^2 + h_t^2}{h_t} \right) \end{pmatrix} + \begin{pmatrix} O_p\!\left( \frac{1}{\sqrt{NH}} \right) \\ O_p\!\left( \frac{1}{h_x \sqrt{NH}} \right) \\ O_p\!\left( \frac{1}{h_y \sqrt{NH}} \right) \\ O_p\!\left( \frac{1}{h_t \sqrt{NH}} \right) \end{pmatrix};$$
and for any $(x,y;t) \in J_h \setminus S_h$, we have
$$\begin{pmatrix} \hat a(x,y;t) \\ \hat b(x,y;t) \\ \hat c(x,y;t) \\ \hat d(x,y;t) \end{pmatrix} = \begin{pmatrix} f(x_\tau, y_\tau; t_\tau) + d_\tau \xi_{000}^{(2)} \\ \frac{d_\tau}{\xi_{200} h_x} \xi_{100}^{(2)} \\ \frac{d_\tau}{\xi_{020} h_y} \xi_{010}^{(2)} \\ \frac{d_\tau}{\xi_{002} h_t} \xi_{001}^{(2)} \end{pmatrix} + \begin{pmatrix} O(h_x^2 + h_y^2 + h_t^2) \\ O\!\left( \frac{h_x^2 + h_y^2 + h_t^2}{h_x} \right) \\ O\!\left( \frac{h_x^2 + h_y^2 + h_t^2}{h_y} \right) \\ O\!\left( \frac{h_x^2 + h_y^2 + h_t^2}{h_t} \right) \end{pmatrix} + \begin{pmatrix} O_p\!\left( \frac{1}{\sqrt{NH}} \right) \\ O_p\!\left( \frac{1}{h_x \sqrt{NH}} \right) \\ O_p\!\left( \frac{1}{h_y \sqrt{NH}} \right) \\ O_p\!\left( \frac{1}{h_t \sqrt{NH}} \right) \end{pmatrix},$$
where $\xi_{rsl} = \int u^r v^s w^l K(u,v) K(w)\, du\, dv\, dw$ over the support of $K(u,v)K(w)$ and $\xi_{rsl}^{(2)} = \int_{Q^{(2)}} u^r v^s w^l K(u,v) K(w)\, du\, dv\, dw$, for $r, s, l = 0, 1, 2$; $J$ is the closure of the set of all jump points of $f(x,y;t)$; $J_h = \{(x,y;t) \in \Omega_h : (x - x^*)^2 / h_x^2 + (y - y^*)^2 / h_y^2 \le 1, |t - t^*| / h_t \le 1, \text{ for some } (x^*, y^*; t^*) \in J\}$; $S$ is the set of singular points in $J$, including the crossing points of two or more edges, points on an edge surface at which the edge surface does not have a unique tangent surface, and points in $J$ at which the jump sizes in $f(x,y;t)$ are zero; and $S_h = \{(x,y;t) \in \Omega_h : (x - x^*)^2 / h_x^2 + (y - y^*)^2 / h_y^2 \le 1, |t - t^*| / h_t \le 1, \text{ for some } (x^*, y^*; t^*) \in S\}$. The point $(x_\tau, y_\tau; t_\tau) \in J \setminus S$ is the projection of $(x,y;t)$ onto $J$, with the Euclidean distance between the two points being $c \sqrt{h_x^2 + h_y^2 + h_t^2}$ for a constant $0 < c < 1$, and $f(x_\tau, y_\tau; t_\tau)$ is the smaller of the two one-sided limits of $f(x,y;t)$ at $(x_\tau, y_\tau; t_\tau)$. In cases when $O(x,y;t)$ contains jumps, it is assumed without loss of generality that $O(x,y;t)$ is divided by the edge surface into two parts, $I_1$ and $I_2$, with a positive jump size $d_\tau$ from $I_1$ to $I_2$ at $(x_\tau, y_\tau; t_\tau)$, and $Q^{(1)}$ and $Q^{(2)}$ are the two corresponding parts in the support of $K(u,v)K(w)$.
The next two theorems establish the consistency of the proposed edge-preserving image denoising procedure (2)–(6). First, we have the following theorem about the WRMS values defined in (5).
Theorem 2.
Assume that the conditions in Theorem 1 are satisfied, $h_x^2 + h_y^2 + h_t^2 = o(1)$, $(h_x^2 + h_y^2 + h_t^2) / h_{\min} = o(1)$, $1/(NH) = o(1)$, and $1/(N H h_{\min}^2) = o(1)$. Then, we have the following results: for any $(x,y;t) \in \Omega_h \setminus J_h$,
$$e(x,y;t) = \sigma^2 + o_p(1), \qquad e^{(l)}(x,y;t) = \sigma^2 + o_p(1), \quad \text{for } l = 1, 2;$$
and for any $(x,y;t) \in J_h \setminus S_h$,
$$e(x,y;t) = \sigma^2 + d_\tau^2 C_\tau^2 + o_p(1), \qquad e^{(l)}(x,y;t) = \sigma^2 + d_\tau^2 \big( C_\tau^{(l)} \big)^2 + o_p(1), \quad \text{for } l = 1, 2,$$
where
$$C_\tau = \Bigg( \int_{Q^{(1)}} \left[ \xi_{000}^{(2)} + \frac{\xi_{100}^{(2)}}{\xi_{200}} u + \frac{\xi_{010}^{(2)}}{\xi_{020}} v + \frac{\xi_{001}^{(2)}}{\xi_{002}} w \right]^2 K(u,v) K(w)\, du\, dv\, dw + \int_{Q^{(2)}} \left[ 1 - \xi_{000}^{(2)} - \frac{\xi_{100}^{(2)}}{\xi_{200}} u - \frac{\xi_{010}^{(2)}}{\xi_{020}} v - \frac{\xi_{001}^{(2)}}{\xi_{002}} w \right]^2 K(u,v) K(w)\, du\, dv\, dw \Bigg)^{1/2}$$
and
$$C_\tau^{(l)} = \Bigg( 2 \int_{Q^{(1l)}} \left[ B_{0l} + \frac{B_{1l}}{\xi_{200}} u + \frac{B_{2l}}{\xi_{020}} v + \frac{B_{3l}}{\xi_{002}} w \right]^2 K(u,v) K(w)\, du\, dv\, dw + 2 \int_{Q^{(2l)}} \left[ 1 - B_{0l} - \frac{B_{1l}}{\xi_{200}} u - \frac{B_{2l}}{\xi_{020}} v - \frac{B_{3l}}{\xi_{002}} w \right]^2 K(u,v) K(w)\, du\, dv\, dw \Bigg)^{1/2},$$
with the quantities $Q^{(1l)}$, $Q^{(2l)}$, $B_{0l}$, $B_{1l}$, $B_{2l}$, and $B_{3l}$ defined as follows. Let $g = \left( \frac{d_\tau}{\xi_{200} h_x} \xi_{100}^{(2)}, \frac{d_\tau}{\xi_{020} h_y} \xi_{010}^{(2)}, \frac{d_\tau}{\xi_{002} h_t} \xi_{001}^{(2)} \right)'$. Then, by (9), $g$ is the asymptotic direction of the gradient vector $\widehat G(x,y;t)$. Let $\widetilde O^{(l)}(x,y;t)$, for $l = 1, 2$, be the two halves of the neighborhood $O(x,y;t)$ separated by the plane passing through $(x,y;t)$ in the direction perpendicular to $g$, and let $\widetilde Q^{(l)}$ be the two corresponding parts in the support of $K(u,v)K(w)$. Then, $Q^{(1l)} = Q^{(1)} \cap \widetilde Q^{(l)}$, $Q^{(2l)} = Q^{(2)} \cap \widetilde Q^{(l)}$, $B_{0l} = \int_{Q^{(2l)}} K(u,v) K(w)\, du\, dv\, dw$, $B_{1l} = \int_{Q^{(2l)}} u K(u,v) K(w)\, du\, dv\, dw$, $B_{2l} = \int_{Q^{(2l)}} v K(u,v) K(w)\, du\, dv\, dw$, and $B_{3l} = \int_{Q^{(2l)}} w K(u,v) K(w)\, du\, dv\, dw$, for $l = 1, 2$.
Theorem 3.
Under the conditions in Theorem 2 and the extra assumption that the threshold parameter $u = u_N \to 0$ as $N \to \infty$, we have, for any $(x,y;t) \in \Omega_h$,
$$\hat f(x,y;t) = f(x,y;t) + o_p(1).$$
The proofs of these theoretical results are given in Appendix A.

3.2. Numerical Studies

In this part, we study the numerical performance of our proposed method for denoising an image sequence. First, we consider a simulation example in which the true image intensity function in model (1) has the following expression:
$$f(x,y;t) = \begin{cases} -2(x - 0.5)^2 - 2(y - 0.5)^2 - 0.1 \sin(2\pi t) + 1, & \text{if } r(x,y;t) \le 0.25^2, \\ -2(x - 0.5)^2 - 2(y - 0.5)^2 - 0.1 \sin(2\pi t), & \text{otherwise}, \end{cases}$$
where $r(x,y;t) = (x - 0.5)^2 + (y - 0.5)^2 + 0.01 \sin(2\pi t)$, $(x,y) \in \Omega = [0,1] \times [0,1]$, and $t \in [0,1]$. At a given value of $t$, $f(x,y;t)$ has a circular edge curve, $r(x,y;t) = 0.25^2$, with a constant jump size of 1 in $f(x,y;t)$ at the edges. The radius of the circular edge curve, $\sqrt{0.25^2 - 0.01 \sin(2\pi t)}$, changes periodically over $t \in [0,1]$. The image intensity function $f(x,y;t)$ at $t = 0.01$ and $0.25$ and its temporal profile $f(0.25, 0.25; t)$ are shown in Figure 3. It can be seen that both the image intensity level at a given pixel and the edge curve change gradually as $t$ varies in $[0,1]$.
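The simulated true intensity function above can be coded directly. In this sketch, the signs of the quadratic and sinusoidal terms follow our reading of the definition and should be treated as an assumption:

```python
import numpy as np

def f_true(x, y, t):
    """True image intensity f(x, y; t) of the simulation example.

    A smooth background that oscillates slowly in t, plus a jump of size 1
    inside a circular edge curve whose radius varies periodically over t.
    The signs of the quadratic and sinusoidal terms are an assumption."""
    base = -2 * (x - 0.5) ** 2 - 2 * (y - 0.5) ** 2 - 0.1 * np.sin(2 * np.pi * t)
    r = (x - 0.5) ** 2 + (y - 0.5) ** 2 + 0.01 * np.sin(2 * np.pi * t)
    return base + (r <= 0.25 ** 2)  # unit jump inside the edge curve
```

Evaluating this function on the design grid and adding correlated noise reproduces the simulation setting of model (1).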
In model (1), the random errors $\{\varepsilon_{ijk}, i = 1, 2, \ldots, n_x, j = 1, 2, \ldots, n_y, k = 1, 2, \ldots, n_t\}$ are generated by the function spatialnoise() in the R package neuRosim (cf., Welvaert et al. [27]). In that R function, there are two parameters, $\rho$ and $\sigma$, to specify in advance, where $\rho$ controls the data autocorrelation in all three dimensions and $\sigma$ is the common standard deviation of the random errors. In all our examples, $\sigma$ is fixed at 0.1, 0.2, or 0.3, and $\rho$ is fixed at 0.1, 0.3, or 0.5, to study the possible impact of the data noise level and data correlation on the performance of the proposed method. Without loss of generality, we set $n_x = n_y$ in all examples. In the model estimation procedure (2)–(6), we set $h_x = h_y$, and the kernel function $K(v)$ is chosen to be the following truncated Gaussian density function:
$$K(v) = \begin{cases} \dfrac{\exp(-v^2/2) - \exp(-0.5)}{\int_{-1}^{1} \left[ \exp(-u^2/2) - \exp(-0.5) \right] du}, & \text{if } |v| \le 1, \\[6pt] 0, & \text{otherwise}. \end{cases}$$
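A sketch of this truncated Gaussian kernel follows; the normalizing constant is evaluated numerically here rather than in closed form, which is sufficient for checking that the kernel is a density on $[-1, 1]$ and vanishes at the boundary of its support:

```python
import numpy as np

def truncated_gaussian_kernel(v):
    """Truncated Gaussian density kernel supported on [-1, 1].

    The numerator exp(-v^2/2) - exp(-0.5) vanishes at |v| = 1, so the
    kernel decays smoothly to zero at the edge of its support.  The
    normalizing constant is computed numerically instead of in closed
    form (a simple Riemann sum; the integrand is zero at the endpoints)."""
    u = np.linspace(-1.0, 1.0, 20001)
    const = np.sum(np.exp(-u**2 / 2) - np.exp(-0.5)) * (u[1] - u[0])
    v = np.asarray(v, dtype=float)
    return np.where(np.abs(v) <= 1,
                    (np.exp(-v**2 / 2) - np.exp(-0.5)) / const, 0.0)
```

Subtracting $\exp(-0.5)$ before normalizing is what distinguishes this kernel from a plainly clipped Gaussian: it avoids a discontinuity at $|v| = 1$.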
In cases when $\sigma = 0.1$, 0.2, or 0.3; $n_x = 64$ or 128; $n_t = 50$ or 100; and $\rho = 0.1$, 0.3, or 0.5, the MSE values of the estimator $\hat f(x,y;t)$ defined in (6) are presented in Table 1, along with the corresponding parameters $h_x$, $h_t$, and $u$ selected by the modified CV procedure (7) and (8). In each case considered, the MSE value is computed from 10 replicated simulations. For comparison purposes, the optimal MSE value of the estimator $\hat f(x,y;t)$, obtained when its parameters ($h_x$, $h_t$, and $u$) are chosen such that the MSE reaches its minimum in each case, is also presented in the table, along with the corresponding parameter values. From the table, we can draw the following conclusions. (i) The MSE values are smaller when either $n_x$ or $n_t$ is larger, which confirms the consistency results discussed in Section 3.1. (ii) When $\rho$ is larger (i.e., the spatio-temporal data correlation is stronger), the MSE values are larger. So, data correlation does have an impact on the performance of the proposed method, which is intuitively reasonable. (iii) By comparing the MSE and the optimal MSE values, we can see that the MSE values are usually larger than their optimal counterparts, but the differences are small in almost all cases considered. This indicates that the modified CV procedure (7) and (8) for determining the values of the parameters $(h_x, h_t, u)$ is quite effective. (iv) The parameter values chosen by the modified CV procedure (7) and (8) are quite close to the optimal parameter values in most cases considered.
Next, we compare our proposed method, denoted as NEW, with some alternative methods described below. The first alternative method is the conventional LLK procedure (2), by which $f(x,y;t)$ is estimated by $\hat a(x,y;t)$ defined in (3). Its bandwidths are chosen by the conventional CV procedure, without considering any possible spatio-temporal data correlation. As explained in Section 2.1, this estimator would blur edges while removing noise. The second alternative method also uses $\hat a(x,y;t)$ for estimating $f(x,y;t)$, but its bandwidths are chosen by the modified CV procedure (7) and (8). These two alternative methods are denoted as LLK-C and LLK, respectively, where LLK-C denotes the conventional LLK procedure that does not accommodate data correlation. The third alternative method is the one by Gijbels et al. [15], which is designed for edge-preserving denoising of a single image. To apply this method to the current problem, the individual images collected at different time points are denoised by it separately. This method assumes that the observed image intensities at different pixels are independent of each other, and thus its bandwidths can be chosen by the conventional CV procedure. This method is denoted as GLQ. The fourth alternative method uses $\hat f(x,y;t)$ in (6) to estimate $f(x,y;t)$, but its parameters $(h_x, h_t, u)$ are chosen by the conventional CV procedure. This method is denoted as NEW-C. By considering these four alternative methods (i.e., LLK-C, LLK, GLQ, and NEW-C), we can check whether the current problem of denoising an image sequence can be handled properly by the conventional LLK procedure with or without the modified CV procedure, by an existing edge-preserving method designed for denoising a single image, or by the proposed method without considering the possible spatio-temporal data correlation.
To evaluate their performance, in addition to the regular MSE criterion, we also consider the following edge-preservation (EP) criterion, originally discussed in Hall and Qiu [28]:
$$EP(\hat f) = \big| JS(\hat f) - JS(f) \big| \,/\, JS(f),$$
where
$$JS(f) = \frac{1}{(n_x - 2)(n_y - 2)(n_t - 2)} \sum_{i=2}^{n_x - 1} \sum_{j=2}^{n_y - 1} \sum_{k=2}^{n_t - 1} \Big( \big[ f(x_{i+1}, y_j; t_k) - f(x_{i-1}, y_j; t_k) \big]^2 + \big[ f(x_i, y_{j+1}; t_k) - f(x_i, y_{j-1}; t_k) \big]^2 + \big[ f(x_i, y_j; t_{k+1}) - f(x_i, y_j; t_{k-1}) \big]^2 \Big)^{1/2},$$
and $JS(\hat f)$ is defined similarly. According to Hall and Qiu [28], $JS(f)$ is a reasonable measure of the cumulative jump magnitude of $f$ at the edge locations. So, $EP(\hat f)$ measures the percentage of the cumulative jump magnitude of $f$ that is lost during data smoothing by the estimator $\hat f$; the smaller its value, the better. In cases when $\sigma = 0.1$, 0.2, or 0.3, $n_x = 128$, $n_t = 100$, and $\rho = 0.1$, 0.3, or 0.5, the MSE and EP values of the related methods are presented in Table 2. From the table, it can be seen that the proposed method NEW has the smallest MSE values, by quite large margins, among all five methods in all cases considered, except the case when $\sigma = 0.1$ and $\rho = 0.1$, where NEW-C has a slightly smaller MSE value than NEW due to the weak data correlation in that case. Likewise, NEW has much smaller EP values than the four competing methods in all cases considered. This example confirms that it is necessary to use edge-preserving procedures when denoising image sequences and that the possible spatio-temporal data correlation should be taken into account during the denoising process. It also confirms the benefit of sharing useful information among neighboring images when denoising an image sequence.
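The JS measure and the EP criterion are straightforward to compute with centered differences over the interior voxels of an image-sequence array. A minimal sketch:

```python
import numpy as np

def js(f):
    """Cumulative jump-magnitude measure JS(f) for an (nx, ny, nt) array,
    using centered differences at the interior voxels."""
    gx = f[2:, 1:-1, 1:-1] - f[:-2, 1:-1, 1:-1]
    gy = f[1:-1, 2:, 1:-1] - f[1:-1, :-2, 1:-1]
    gt = f[1:-1, 1:-1, 2:] - f[1:-1, 1:-1, :-2]
    return np.mean(np.sqrt(gx**2 + gy**2 + gt**2))

def ep(f_hat, f):
    """Edge-preservation criterion EP(f-hat): the relative loss of the
    cumulative jump magnitude after smoothing; smaller is better."""
    return abs(js(f_hat) - js(f)) / js(f)
```

For an oversmoothed estimate whose jumps are attenuated, $JS(\hat f)$ shrinks toward the contribution of the smooth part alone, and $EP(\hat f)$ grows accordingly.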
In the cases when $\sigma=0.2$ and $\rho=0.1$, 0.3 or 0.5, Figure 4 shows the observed images at $t=0.5$ in the first column, and the denoised images by the methods LLK-C, LLK, GLQ, NEW-C and NEW in columns 2–6. From the figure, it can be seen that the denoised images by NEW are the best at removing noise and preserving edges. As a comparison, the denoised images by LLK-C and NEW-C are quite noisy because their bandwidths selected by the conventional CV procedure are relatively small, due to the fact that the conventional CV procedure cannot distinguish the data correlation from the mean structure, as discussed in Section 2.2. The denoised images by LLK are quite blurry because that method does not take the edges into account when denoising the images. The denoised images by GLQ are quite blurry as well, since GLQ denoises the individual images at different time points separately and ignores the serial data correlation.
Next, we apply the proposed method NEW and the four alternative methods LLK-C, LLK, GLQ and NEW-C to a sequence of cell images that records the vasculogenesis process. The sequence has 100 images, and each image has 128 × 128 pixels. A detailed description of the data can be found in Svoboda et al. [29]. The 1st, 50th and 100th images of the sequence are shown in Figure 5.
In the image denoising literature, to test the noise removal ability of an image denoising method, it is common practice to add random noise at a certain level to the test images and then apply the method to the noisy test images (cf., Gijbels et al. [15]). Following this convention, spatio-temporally correlated noise is first generated using the R package neuRosim and then added to the sequence of 100 cell images described above. When generating the noise, $\sigma$ is chosen to be 0.1, 0.2 or 0.3 and $\rho$ is chosen to be 0.1, 0.3 or 0.5, as in the simulation examples presented above. The MSE and EP values of the five image denoising methods based on 10 replicated simulations are presented in Table 3. From the table, it can be seen that NEW still has smaller MSE and EP values than the four competing methods in this example, except in a small number of cases when $\sigma$ and $\rho$ are relatively small.
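For readers who want to reproduce this kind of experiment outside R, the sketch below generates zero-mean Gaussian noise with AR(1) temporal correlation and mild spatial correlation. It is a stand-in for, not a reimplementation of, the neuRosim generator, and the function name and parameterization are our own assumptions:

```python
import numpy as np

def spatiotemporal_noise(nx, ny, nt, sigma=0.2, rho=0.3, seed=0):
    """Gaussian noise field with AR(1) temporal correlation (parameter rho)
    and mild spatial correlation from 3x3 neighborhood averaging.
    This is an assumed stand-in for the neuRosim noise generator."""
    rng = np.random.default_rng(seed)
    eps = np.empty((nx, ny, nt))
    innov_sd = sigma * np.sqrt(1.0 - rho**2)  # keeps the marginal sd near sigma
    eps[:, :, 0] = sigma * rng.standard_normal((nx, ny))
    for k in range(1, nt):
        eps[:, :, k] = rho * eps[:, :, k - 1] + innov_sd * rng.standard_normal((nx, ny))
    # simple spatial smoothing to induce spatial correlation
    padded = np.pad(eps, ((1, 1), (1, 1), (0, 0)), mode='edge')
    sm = np.zeros_like(eps)
    for di in range(3):
        for dj in range(3):
            sm += padded[di:di + nx, dj:dj + ny, :]
    return sm / 9.0
```

Because the spatial averaging is linear and purely spatial, each voxel's time series remains AR(1) with lag-one correlation close to `rho`.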
The 50th observed test image, after the spatio-temporally correlated noise with $\rho=0.1$, 0.3 or 0.5 has been added, is shown in the first column of Figure 6. The denoised images by the five methods LLK-C, LLK, GLQ, NEW-C and NEW are shown in columns 2–6 of the figure. Conclusions similar to those from Figure 4 can be drawn here, and the denoised images by NEW look reasonably good, as the algorithm works well in removing noise and preserving edges.
Finally, we apply the five methods considered in the above examples to a sequence of Landsat images of the Salton Sea region. The Salton Sea, the largest inland lake in California, is located near the southern border of the state and has a great impact on the local ecosystem (Shuford et al. [30]). The Landsat images used here were taken between 27 May 2000 and 24 December 2001. There are a total of 20 images collected at roughly equally spaced time points, and each image has $100\times 100$ pixels. In this example, we consider the case when $\sigma=0.3$ and $\rho=0.3$. The MSE values of the five methods LLK-C, LLK, GLQ, NEW-C and NEW, calculated in the same way as before, are 9.70, 4.78, 12.03, 9.77 and 4.82, respectively. Their EP values are 85.54%, 20.18%, 109.91%, 86.15% and 19.14%, respectively. So, the NEW method has the best edge-preserving performance among the five methods in this example, and NEW and LLK have the best overall noise removal performance. The 10th noisy observed test image, taken on 28 April 2001, and its denoised versions by the five methods are shown in Figure 7. It can be seen from the figure that the denoised images by LLK-C, GLQ and NEW-C are still quite noisy, while the noise in the images generated by NEW and LLK is mostly removed and the edges are preserved reasonably well.

4. Conclusions

In this paper, we have described our proposed edge-preserving image denoising method for image sequences. Major features of the proposed method include: (i) helpful information in neighboring images is shared during image denoising; (ii) edge structures in the observed images are preserved while noise is removed; and (iii) possible spatio-temporal data correlation is accommodated in the related local smoothing procedure. Theoretical arguments given in Section 3.1 and numerical studies presented in Section 3.2 show that the proposed method works well in the various cases considered. Some issues about the proposed method remain for future research. For instance, in the proposed local smoothing procedure (2)–(6), each of the bandwidths $(h_x,h_y,h_t)$ is chosen by the modified CV procedure (7) and (8) to be the same in the entire design space $\Omega\times[0,1]$. Intuitively, relatively small bandwidths are preferred at places where the image intensity surface $f(x,y;t)$ has large curvature, and relatively large bandwidths are preferred where the curvature of $f(x,y;t)$ is small. Thus, in applications where the curvature of $f(x,y;t)$ changes dramatically across the design space, variable bandwidths might be helpful. Such issues will be studied carefully in our future research.

Author Contributions

Methodology, P.Q.; Formal analysis, F.Y.; Writing—original draft preparation, F.Y.; Writing—review and editing, P.Q.; Funding acquisition, P.Q.; Supervision, P.Q. All authors have read and agreed to the published version of the manuscript.

Funding

This research was funded by the National Science Foundation grant DMS-1914639.

Data Availability Statement

Publicly available datasets were analyzed in this study. They can be found from the links: https://cbia.fi.muni.cz/datasets/ and https://earthexplorer.usgs.gov.

Acknowledgments

We thank the four referees for many constructive comments and suggestions about the paper which greatly improved its quality. This research is supported in part by the National Science Foundation grant DMS-1914639.

Conflicts of Interest

The authors declare no conflict of interest.

Appendix A

Appendix A.1. Proof of Proposition 1

Define $B_h(x,y,t)=\{(x',y';t'): (|x'-x|/h_x)^2+(|y'-y|/h_y)^2\le 1,\ |t'-t|\le h_t,\ (x',y';t')\in[0,1]\times[0,1]\times[0,1]\}$, $\Delta_{ijk}=[x_{i-1},x_i]\times[y_{j-1},y_j]\times[t_{k-1},t_k]$, and $x_0=y_0=t_0=0$. Then it can be seen that
$$
\begin{aligned}
&\left|\frac{1}{NH}\sum_{ijk}K\!\left(\frac{x_i-x}{h_x},\frac{y_j-y}{h_y}\right)K\!\left(\frac{t_k-t}{h_t}\right)-1\right|\\
&\quad=\left|\frac{1}{H}\sum_{ijk}\int_{\Delta_{ijk}}K\!\left(\frac{x_i-x}{h_x},\frac{y_j-y}{h_y}\right)K\!\left(\frac{t_k-t}{h_t}\right)du\,dv\,dw-1\right|\\
&\quad=\left|\frac{1}{H}\sum_{ijk}\int_{\Delta_{ijk}}K\!\left(\frac{x_i-x}{h_x},\frac{y_j-y}{h_y}\right)K\!\left(\frac{t_k-t}{h_t}\right)du\,dv\,dw-\frac{1}{H}\int_{B_h(x,y,t)}K\!\left(\frac{u-x}{h_x},\frac{v-y}{h_y}\right)K\!\left(\frac{w-t}{h_t}\right)du\,dv\,dw\right|\\
&\quad=\left|\frac{1}{H}\sum_{ijk}\int_{\Delta_{ijk}}K\!\left(\frac{x_i-x}{h_x},\frac{y_j-y}{h_y}\right)K\!\left(\frac{t_k-t}{h_t}\right)du\,dv\,dw-\frac{1}{H}\sum_{ijk}\int_{B_h(x,y,t)\cap\Delta_{ijk}}K\!\left(\frac{u-x}{h_x},\frac{v-y}{h_y}\right)K\!\left(\frac{w-t}{h_t}\right)du\,dv\,dw\right|\\
&\quad=\Bigg|\frac{1}{H}\sum_{ijk}\int_{B_h(x,y,t)^c\cap\Delta_{ijk}}K\!\left(\frac{x_i-x}{h_x},\frac{y_j-y}{h_y}\right)K\!\left(\frac{t_k-t}{h_t}\right)du\,dv\,dw\\
&\qquad\quad+\frac{1}{H}\sum_{ijk}\int_{B_h(x,y,t)\cap\Delta_{ijk}}\left[K\!\left(\frac{x_i-x}{h_x},\frac{y_j-y}{h_y}\right)K\!\left(\frac{t_k-t}{h_t}\right)-K\!\left(\frac{u-x}{h_x},\frac{v-y}{h_y}\right)K\!\left(\frac{w-t}{h_t}\right)\right]du\,dv\,dw\Bigg|\\
&\quad\le O\!\left(\frac{1}{n_{\min}h_{\min}}\right)+\frac{1}{H}\sum_{ijk}\int_{B_h(x,y,t)\cap\Delta_{ijk}}\left|K\!\left(\frac{x_i-x}{h_x},\frac{y_j-y}{h_y}\right)K\!\left(\frac{t_k-t}{h_t}\right)-K\!\left(\frac{u-x}{h_x},\frac{v-y}{h_y}\right)K\!\left(\frac{w-t}{h_t}\right)\right|du\,dv\,dw\\
&\quad\le O\!\left(\frac{1}{n_{\min}h_{\min}}\right)+\frac{1}{H}\sum_{ijk}\int_{B_h(x,y,t)\cap\Delta_{ijk}}\frac{(1+\sqrt{2})\,C}{n_{\min}h_{\min}}\,du\,dv\,dw\\
&\quad=O\!\left(\frac{1}{n_{\min}h_{\min}}\right)+\frac{1}{H}\cdot\frac{(1+\sqrt{2})\,C}{n_{\min}h_{\min}}\int_{B_h(x,y,t)}du\,dv\,dw
=O\!\left(\frac{1}{n_{\min}h_{\min}}\right),
\end{aligned}
$$
where $C\ge 0$ is the Lipschitz constant satisfying $|K(u)-K(u')|\le C\,\|u-u'\|$. So, the first result in Proposition 1 is valid.
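As a quick numerical sanity check of this Riemann-sum approximation, the following one-dimensional analogue (our own illustrative construction, using an Epanechnikov kernel on an equally spaced design, not code from the paper) shows the discrepancy between the normalized kernel sum and 1 shrinking as $n$ grows for a fixed bandwidth:

```python
import numpy as np

def kernel_sum_error(n, h, x=0.5):
    """|(1/(n h)) * sum_i K((x_i - x)/h) - 1| for the Epanechnikov kernel
    K(u) = 0.75 (1 - u^2) on [-1, 1]: a 1-D analogue of the Riemann-sum
    approximation bounded in Proposition 1."""
    xi = (np.arange(1, n + 1) - 0.5) / n   # equally spaced design points in (0, 1)
    u = (xi - x) / h
    k = np.where(np.abs(u) <= 1.0, 0.75 * (1.0 - u**2), 0.0)
    return abs(k.sum() / (n * h) - 1.0)
```

Evaluating this at increasing $n$ for fixed $h$ illustrates the $O(1/(n_{\min} h_{\min}))$ behavior of the discretization error.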
To prove the second result, it can be checked that
$$
\begin{aligned}
&\mathbb{E}\left|\frac{1}{NH}\sum_{ijk}\varepsilon_{ijk}K\!\left(\frac{x_i-x}{h_x},\frac{y_j-y}{h_y}\right)K\!\left(\frac{t_k-t}{h_t}\right)\right|^2
=\mathrm{Var}\left(\frac{1}{NH}\sum_{ijk}\varepsilon_{ijk}K\!\left(\frac{x_i-x}{h_x},\frac{y_j-y}{h_y}\right)K\!\left(\frac{t_k-t}{h_t}\right)\right)\\
&\quad=\frac{1}{N^2H^2}\sum_{ijk}\sum_{i'j'k'}K\!\left(\frac{x_i-x}{h_x},\frac{y_j-y}{h_y}\right)K\!\left(\frac{t_k-t}{h_t}\right)K\!\left(\frac{x_{i'}-x}{h_x},\frac{y_{j'}-y}{h_y}\right)K\!\left(\frac{t_{k'}-t}{h_t}\right)\mathrm{Cov}(\varepsilon_{ijk},\varepsilon_{i'j'k'})\\
&\quad\le\frac{1}{N^2H^2}\sum_{ijk}\sum_{i'j'k'}K\!\left(\frac{x_i-x}{h_x},\frac{y_j-y}{h_y}\right)K\!\left(\frac{t_k-t}{h_t}\right)K\!\left(\frac{x_{i'}-x}{h_x},\frac{y_{j'}-y}{h_y}\right)K\!\left(\frac{t_{k'}-t}{h_t}\right)c_1\sigma^2\rho^{\,c_2\max\{|i-i'|,|j-j'|,|k-k'|\}}\\
&\quad\le\frac{1}{N^2H^2}\sum_{ijk}K\!\left(\frac{x_i-x}{h_x},\frac{y_j-y}{h_y}\right)K\!\left(\frac{t_k-t}{h_t}\right)\,c_1\sigma^2\,24\int_0^\infty\tau^2\rho^{\,c_2\tau}\,d\tau
=O\!\left(\frac{1}{NH}\right).
\end{aligned}
$$
Similarly, it can be checked that
$$
\begin{aligned}
&\mathbb{E}\left|\frac{1}{NH}\sum_{ijk}(\varepsilon_{ijk}^2-\sigma^2)K\!\left(\frac{x_i-x}{h_x},\frac{y_j-y}{h_y}\right)K\!\left(\frac{t_k-t}{h_t}\right)\right|^2
=\mathrm{Var}\left(\frac{1}{NH}\sum_{ijk}\varepsilon_{ijk}^2K\!\left(\frac{x_i-x}{h_x},\frac{y_j-y}{h_y}\right)K\!\left(\frac{t_k-t}{h_t}\right)\right)\\
&\quad=\frac{1}{N^2H^2}\sum_{ijk}\sum_{i'j'k'}K\!\left(\frac{x_i-x}{h_x},\frac{y_j-y}{h_y}\right)K\!\left(\frac{t_k-t}{h_t}\right)K\!\left(\frac{x_{i'}-x}{h_x},\frac{y_{j'}-y}{h_y}\right)K\!\left(\frac{t_{k'}-t}{h_t}\right)\mathrm{Cov}(\varepsilon_{ijk}^2,\varepsilon_{i'j'k'}^2)\\
&\quad\le\frac{1}{N^2H^2}\sum_{ijk}\sum_{i'j'k'}K\!\left(\frac{x_i-x}{h_x},\frac{y_j-y}{h_y}\right)K\!\left(\frac{t_k-t}{h_t}\right)K\!\left(\frac{x_{i'}-x}{h_x},\frac{y_{j'}-y}{h_y}\right)K\!\left(\frac{t_{k'}-t}{h_t}\right)\,12\left(c_1\sigma^2\rho^{\,c_2\max\{|i-i'|,|j-j'|,|k-k'|\}}\right)^{1/4}\mathbb{E}(\varepsilon_{111}^4)\\
&\quad\le\frac{1}{N^2H^2}\sum_{ijk}K\!\left(\frac{x_i-x}{h_x},\frac{y_j-y}{h_y}\right)K\!\left(\frac{t_k-t}{h_t}\right)\,12\left(c_1\sigma^2\,24\int_0^\infty\tau^2\rho^{\,c_2\tau}\,d\tau\right)^{1/3}\left(\mathbb{E}(\varepsilon_{111}^6)\right)^{2/3}
=O\!\left(\frac{1}{NH}\right).
\end{aligned}
$$
The first inequality in the above expression is based on the result in Davydov [31]. So, the third result is valid.

Appendix A.2. Proof of Theorem 1

We first consider the case when $(x,y;t)\in\Omega_h\setminus J_h$. By Taylor expansion, we have
$$Z_{ijk}=f(x_i,y_j;t_k)+\varepsilon_{ijk}=f(x,y;t)+(x_i-x)f_x(x,y;t)+(y_j-y)f_y(x,y;t)+(t_k-t)f_t(x,y;t)+O(h_x^2+h_y^2+h_t^2)+\varepsilon_{ijk}.$$
So, it can be checked that
$$
\begin{pmatrix}\sum_{ijk}Z_{ijk}K_{ijk}\\ \sum_{ijk}(x_i-x)Z_{ijk}K_{ijk}\\ \sum_{ijk}(y_j-y)Z_{ijk}K_{ijk}\\ \sum_{ijk}(t_k-t)Z_{ijk}K_{ijk}\end{pmatrix}
=M\begin{pmatrix}f(x,y;t)\\ f_x(x,y;t)\\ f_y(x,y;t)\\ f_t(x,y;t)\end{pmatrix}
+\begin{pmatrix}\sum_{ijk}O(h_x^2+h_y^2+h_t^2)K_{ijk}\\ \sum_{ijk}(x_i-x)O(h_x^2+h_y^2+h_t^2)K_{ijk}\\ \sum_{ijk}(y_j-y)O(h_x^2+h_y^2+h_t^2)K_{ijk}\\ \sum_{ijk}(t_k-t)O(h_x^2+h_y^2+h_t^2)K_{ijk}\end{pmatrix}
+\begin{pmatrix}\sum_{ijk}\varepsilon_{ijk}K_{ijk}\\ \sum_{ijk}(x_i-x)\varepsilon_{ijk}K_{ijk}\\ \sum_{ijk}(y_j-y)\varepsilon_{ijk}K_{ijk}\\ \sum_{ijk}(t_k-t)\varepsilon_{ijk}K_{ijk}\end{pmatrix},
$$
where
$$
M=\begin{pmatrix}m_{000}&m_{100}&m_{010}&m_{001}\\ m_{100}&m_{200}&m_{110}&m_{101}\\ m_{010}&m_{110}&m_{020}&m_{011}\\ m_{001}&m_{101}&m_{011}&m_{002}\end{pmatrix}.
$$
From Expression (3), we have
$$
\begin{pmatrix}\hat a(x,y;t)\\ \hat b(x,y;t)\\ \hat c(x,y;t)\\ \hat d(x,y;t)\end{pmatrix}
=\begin{pmatrix}f(x,y;t)\\ f_x(x,y;t)\\ f_y(x,y;t)\\ f_t(x,y;t)\end{pmatrix}
+M^{-1}\begin{pmatrix}\sum_{ijk}O(h_x^2+h_y^2+h_t^2)K_{ijk}\\ \sum_{ijk}(x_i-x)O(h_x^2+h_y^2+h_t^2)K_{ijk}\\ \sum_{ijk}(y_j-y)O(h_x^2+h_y^2+h_t^2)K_{ijk}\\ \sum_{ijk}(t_k-t)O(h_x^2+h_y^2+h_t^2)K_{ijk}\end{pmatrix}
+M^{-1}\begin{pmatrix}\sum_{ijk}\varepsilon_{ijk}K_{ijk}\\ \sum_{ijk}(x_i-x)\varepsilon_{ijk}K_{ijk}\\ \sum_{ijk}(y_j-y)\varepsilon_{ijk}K_{ijk}\\ \sum_{ijk}(t_k-t)\varepsilon_{ijk}K_{ijk}\end{pmatrix}.
$$
By some simple algebraic manipulations, we have
$$
M^{-1}=\begin{pmatrix}
O\!\left(\frac{1}{NH}\right)&O\!\left(\frac{1}{NHh_x}\right)&O\!\left(\frac{1}{NHh_y}\right)&O\!\left(\frac{1}{NHh_t}\right)\\
O\!\left(\frac{1}{NHh_x}\right)&O\!\left(\frac{1}{NHh_x^2}\right)&O\!\left(\frac{1}{NHh_xh_y}\right)&O\!\left(\frac{1}{NHh_xh_t}\right)\\
O\!\left(\frac{1}{NHh_y}\right)&O\!\left(\frac{1}{NHh_xh_y}\right)&O\!\left(\frac{1}{NHh_y^2}\right)&O\!\left(\frac{1}{NHh_yh_t}\right)\\
O\!\left(\frac{1}{NHh_t}\right)&O\!\left(\frac{1}{NHh_xh_t}\right)&O\!\left(\frac{1}{NHh_yh_t}\right)&O\!\left(\frac{1}{NHh_t^2}\right)
\end{pmatrix}.
$$
Then,
$$
\begin{pmatrix}\hat a(x,y;t)\\ \hat b(x,y;t)\\ \hat c(x,y;t)\\ \hat d(x,y;t)\end{pmatrix}
=\begin{pmatrix}f(x,y;t)\\ f_x(x,y;t)\\ f_y(x,y;t)\\ f_t(x,y;t)\end{pmatrix}
+\begin{pmatrix}O(h_x^2+h_y^2+h_t^2)\\ O\!\left(\frac{h_x^2+h_y^2+h_t^2}{h_x}\right)\\ O\!\left(\frac{h_x^2+h_y^2+h_t^2}{h_y}\right)\\ O\!\left(\frac{h_x^2+h_y^2+h_t^2}{h_t}\right)\end{pmatrix}
+\begin{pmatrix}O_p\!\left(\frac{1}{\sqrt{NH}}\right)\\ O_p\!\left(\frac{1}{h_x\sqrt{NH}}\right)\\ O_p\!\left(\frac{1}{h_y\sqrt{NH}}\right)\\ O_p\!\left(\frac{1}{h_t\sqrt{NH}}\right)\end{pmatrix}.
$$
Now, we consider the case when $(x,y;t)\in J_h\setminus S_h$. If $(x_i,y_j;t_k)\in I_1$, then we have
$$Z_{ijk}=f(x_i,y_j;t_k)+\varepsilon_{ijk}=f(x_\tau,y_\tau;t_\tau)+O(h_x^2+h_y^2+h_t^2)+\varepsilon_{ijk},$$
and if $(x_i,y_j;t_k)\in I_2$, we have
$$Z_{ijk}=f(x_i,y_j;t_k)+\varepsilon_{ijk}=f(x_\tau,y_\tau;t_\tau)+d_\tau+O(h_x^2+h_y^2+h_t^2)+\varepsilon_{ijk}.$$
By some similar arguments to those in the case considered above, we have
$$
\begin{aligned}
\begin{pmatrix}\hat a(x,y;t)\\ \hat b(x,y;t)\\ \hat c(x,y;t)\\ \hat d(x,y;t)\end{pmatrix}
&=\begin{pmatrix}
f(x_\tau,y_\tau;t_\tau)+d_\tau\,\dfrac{\sum_{(x_i,y_j;t_k)\in I_2}K_{ijk}}{\sum_{ijk}K_{ijk}}\\[2mm]
\dfrac{d_\tau}{h_x}\,\dfrac{\sum_{(x_i,y_j;t_k)\in I_2}[(x_i-x)/h_x]K_{ijk}}{\sum_{ijk}[(x_i-x)/h_x]^2K_{ijk}}\\[2mm]
\dfrac{d_\tau}{h_y}\,\dfrac{\sum_{(x_i,y_j;t_k)\in I_2}[(y_j-y)/h_y]K_{ijk}}{\sum_{ijk}[(y_j-y)/h_y]^2K_{ijk}}\\[2mm]
\dfrac{d_\tau}{h_t}\,\dfrac{\sum_{(x_i,y_j;t_k)\in I_2}[(t_k-t)/h_t]K_{ijk}}{\sum_{ijk}[(t_k-t)/h_t]^2K_{ijk}}
\end{pmatrix}
+\begin{pmatrix}O(h_x^2+h_y^2+h_t^2)\\ O\!\left(\frac{h_x^2+h_y^2+h_t^2}{h_x}\right)\\ O\!\left(\frac{h_x^2+h_y^2+h_t^2}{h_y}\right)\\ O\!\left(\frac{h_x^2+h_y^2+h_t^2}{h_t}\right)\end{pmatrix}
+\begin{pmatrix}O_p\!\left(\frac{1}{\sqrt{NH}}\right)\\ O_p\!\left(\frac{1}{h_x\sqrt{NH}}\right)\\ O_p\!\left(\frac{1}{h_y\sqrt{NH}}\right)\\ O_p\!\left(\frac{1}{h_t\sqrt{NH}}\right)\end{pmatrix}\\[2mm]
&=\begin{pmatrix}
f(x_\tau,y_\tau;t_\tau)+d_\tau\,\xi_{000}^{(2)}\\[1mm]
\dfrac{d_\tau\,\xi_{100}^{(2)}}{h_x\,\xi_{200}}\\[1mm]
\dfrac{d_\tau\,\xi_{010}^{(2)}}{h_y\,\xi_{020}}\\[1mm]
\dfrac{d_\tau\,\xi_{001}^{(2)}}{h_t\,\xi_{002}}
\end{pmatrix}
+\begin{pmatrix}O(h_x^2+h_y^2+h_t^2)\\ O\!\left(\frac{h_x^2+h_y^2+h_t^2}{h_x}\right)\\ O\!\left(\frac{h_x^2+h_y^2+h_t^2}{h_y}\right)\\ O\!\left(\frac{h_x^2+h_y^2+h_t^2}{h_t}\right)\end{pmatrix}
+\begin{pmatrix}O_p\!\left(\frac{1}{\sqrt{NH}}\right)\\ O_p\!\left(\frac{1}{h_x\sqrt{NH}}\right)\\ O_p\!\left(\frac{1}{h_y\sqrt{NH}}\right)\\ O_p\!\left(\frac{1}{h_t\sqrt{NH}}\right)\end{pmatrix}.
\end{aligned}
$$

Appendix A.3. Proof of Theorem 2

We prove the second equations in (10) and (11) here; the first equations can be proved similarly. For simplicity, from now on we write $\hat a^{(l)}(x,y;t)$, $\hat b^{(l)}(x,y;t)$, $\hat c^{(l)}(x,y;t)$, $\hat d^{(l)}(x,y;t)$, $O^{(l)}(x,y;t)$ and $\tilde O^{(l)}(x,y;t)$ as $\hat a^{(l)}$, $\hat b^{(l)}$, $\hat c^{(l)}$, $\hat d^{(l)}$, $O^{(l)}$ and $\tilde O^{(l)}$, respectively. First, by Proposition 1, it is easy to show that
$$\frac{\sum_{ijk}\varepsilon_{ijk}K\!\left(\frac{x_i-x}{h_x},\frac{y_j-y}{h_y}\right)K\!\left(\frac{t_k-t}{h_t}\right)}{\sum_{ijk}K\!\left(\frac{x_i-x}{h_x},\frac{y_j-y}{h_y}\right)K\!\left(\frac{t_k-t}{h_t}\right)}=O_p\!\left(\frac{1}{\sqrt{NH}}\right),\tag{A1}$$
$$\frac{\sum_{ijk}(\varepsilon_{ijk}^2-\sigma^2)K\!\left(\frac{x_i-x}{h_x},\frac{y_j-y}{h_y}\right)K\!\left(\frac{t_k-t}{h_t}\right)}{\sum_{ijk}K\!\left(\frac{x_i-x}{h_x},\frac{y_j-y}{h_y}\right)K\!\left(\frac{t_k-t}{h_t}\right)}=o_p(1).\tag{A2}$$
Let us first consider the case when $(x,y;t)\in\Omega_h\setminus J_h$. In such a case, it can be checked that
$$
\begin{aligned}
e^{(l)}(x,y;t)&=\frac{\sum_{(x_i,y_j;t_k)\in O^{(l)}}\left[\varepsilon_{ijk}+f(x_i,y_j;t_k)-\hat a^{(l)}-\hat b^{(l)}(x_i-x)-\hat c^{(l)}(y_j-y)-\hat d^{(l)}(t_k-t)\right]^2K_{ijk}}{\sum_{(x_i,y_j;t_k)\in O^{(l)}}K_{ijk}}\\
&=\frac{\sum_{(x_i,y_j;t_k)\in O^{(l)}}\varepsilon_{ijk}^2K_{ijk}}{\sum_{(x_i,y_j;t_k)\in O^{(l)}}K_{ijk}}
+\frac{2\sum_{(x_i,y_j;t_k)\in O^{(l)}}\varepsilon_{ijk}\left[f(x_i,y_j;t_k)-\hat a^{(l)}-\hat b^{(l)}(x_i-x)-\hat c^{(l)}(y_j-y)-\hat d^{(l)}(t_k-t)\right]K_{ijk}}{\sum_{(x_i,y_j;t_k)\in O^{(l)}}K_{ijk}}\\
&\qquad+\frac{\sum_{(x_i,y_j;t_k)\in O^{(l)}}\left[f(x_i,y_j;t_k)-\hat a^{(l)}-\hat b^{(l)}(x_i-x)-\hat c^{(l)}(y_j-y)-\hat d^{(l)}(t_k-t)\right]^2K_{ijk}}{\sum_{(x_i,y_j;t_k)\in O^{(l)}}K_{ijk}}\\
&=:A_1^{(l)}(x,y;t)+A_2^{(l)}(x,y;t)+A_3^{(l)}(x,y;t).
\end{aligned}
$$
Similar to (A2), we have
$$A_1^{(l)}(x,y;t)=\sigma^2+o_p(1).\tag{A3}$$
By the Taylor expansion of $f(x_i,y_j;t_k)$ at the point $(x,y;t)$, the results in Theorem 1, and arguments similar to those for (A1), we have
$$
\begin{aligned}
A_2^{(l)}(x,y;t)&\le 2\,|f(x,y;t)-\hat a^{(l)}|\left|\frac{\sum_{(x_i,y_j;t_k)\in O^{(l)}}\varepsilon_{ijk}K_{ijk}}{\sum_{(x_i,y_j;t_k)\in O^{(l)}}K_{ijk}}\right|
+2h_x\,|f_x(x,y;t)-\hat b^{(l)}|\left|\frac{\sum_{(x_i,y_j;t_k)\in O^{(l)}}\varepsilon_{ijk}\frac{x_i-x}{h_x}K_{ijk}}{\sum_{(x_i,y_j;t_k)\in O^{(l)}}K_{ijk}}\right|\\
&\quad+2h_y\,|f_y(x,y;t)-\hat c^{(l)}|\left|\frac{\sum_{(x_i,y_j;t_k)\in O^{(l)}}\varepsilon_{ijk}\frac{y_j-y}{h_y}K_{ijk}}{\sum_{(x_i,y_j;t_k)\in O^{(l)}}K_{ijk}}\right|
+2h_t\,|f_t(x,y;t)-\hat d^{(l)}|\left|\frac{\sum_{(x_i,y_j;t_k)\in O^{(l)}}\varepsilon_{ijk}\frac{t_k-t}{h_t}K_{ijk}}{\sum_{(x_i,y_j;t_k)\in O^{(l)}}K_{ijk}}\right|
=o_p(1).
\end{aligned}\tag{A4}
$$
Similarly, we have
$$A_3^{(l)}(x,y;t)=o_p(1).\tag{A5}$$
By combining (A3)–(A5), we have
$$e^{(l)}(x,y;t)=\sigma^2+o_p(1).$$
Now, let us consider the case when $(x,y;t)\in J_h\setminus S_h$. As in the previous case, we write
$$e^{(l)}(x,y;t)=A_1^{(l)}(x,y;t)+A_2^{(l)}(x,y;t)+A_3^{(l)}(x,y;t).$$
Here, we still have
$$A_1^{(l)}(x,y;t)=\sigma^2+o_p(1).\tag{A6}$$
For $A_2^{(l)}(x,y;t)$, we have
$$
\begin{aligned}
A_2^{(l)}(x,y;t)&=\frac{2\sum_{(x_i,y_j;t_k)\in I_1\cap O^{(l)}}\varepsilon_{ijk}\left[f(x_i,y_j;t_k)-\hat a^{(l)}-\hat b^{(l)}(x_i-x)-\hat c^{(l)}(y_j-y)-\hat d^{(l)}(t_k-t)\right]K_{ijk}}{\sum_{(x_i,y_j;t_k)\in O^{(l)}}K_{ijk}}\\
&\quad+\frac{2\sum_{(x_i,y_j;t_k)\in I_2\cap O^{(l)}}\varepsilon_{ijk}\left[f(x_i,y_j;t_k)-\hat a^{(l)}-\hat b^{(l)}(x_i-x)-\hat c^{(l)}(y_j-y)-\hat d^{(l)}(t_k-t)\right]K_{ijk}}{\sum_{(x_i,y_j;t_k)\in O^{(l)}}K_{ijk}}\\
&=:A_{21}^{(l)}(x,y;t)+A_{22}^{(l)}(x,y;t).
\end{aligned}
$$
By the results in Theorem 1, we have
$$
\begin{aligned}
A_{21}^{(l)}(x,y;t)&=\frac{2\sum_{(x_i,y_j;t_k)\in I_1\cap O^{(l)}}\varepsilon_{ijk}\left[f(x_i,y_j;t_k)-f(x_\tau,y_\tau;t_\tau)\right]K_{ijk}}{\sum_{(x_i,y_j;t_k)\in O^{(l)}}K_{ijk}}
-\left(D_1+o_p(1)\right)\frac{\sum_{(x_i,y_j;t_k)\in I_1\cap O^{(l)}}\varepsilon_{ijk}K_{ijk}}{\sum_{(x_i,y_j;t_k)\in O^{(l)}}K_{ijk}}\\
&\quad-\left(D_2+o_p(1)\right)\frac{\sum_{(x_i,y_j;t_k)\in I_1\cap O^{(l)}}\varepsilon_{ijk}\frac{x_i-x}{h_x}K_{ijk}}{\sum_{(x_i,y_j;t_k)\in O^{(l)}}K_{ijk}}
-\left(D_3+o_p(1)\right)\frac{\sum_{(x_i,y_j;t_k)\in I_1\cap O^{(l)}}\varepsilon_{ijk}\frac{y_j-y}{h_y}K_{ijk}}{\sum_{(x_i,y_j;t_k)\in O^{(l)}}K_{ijk}}\\
&\quad-\left(D_4+o_p(1)\right)\frac{\sum_{(x_i,y_j;t_k)\in I_1\cap O^{(l)}}\varepsilon_{ijk}\frac{t_k-t}{h_t}K_{ijk}}{\sum_{(x_i,y_j;t_k)\in O^{(l)}}K_{ijk}},
\end{aligned}
$$
where $D_1$, $D_2$, $D_3$ and $D_4$ are constants. By arguments similar to those for (A1), we can conclude that
$$A_{21}^{(l)}=o_p(1).$$
Similarly, we have
$$A_{22}^{(l)}=o_p(1).$$
So,
$$A_2^{(l)}=o_p(1).\tag{A7}$$
By arguments similar to those in the proof of Proposition 1, we have
$$\left|\frac{1}{NH}\sum_{(x_i,y_j;t_k)\in O^{(l)}}K_{ijk}-\frac{1}{2}\right|=o(1).$$
For a function $\phi(x,y;t)$ satisfying $\sup_{x^2+y^2+t^2\le 1}|\phi(x,y;t)|\le b_\phi<\infty$, we have
$$
\begin{aligned}
&\left|\frac{1}{NH}\sum_{(x_i,y_j;t_k)\in I_1\cap O^{(l)}}\phi\!\left(\frac{x_i-x}{h_x},\frac{y_j-y}{h_y};\frac{t_k-t}{h_t}\right)K_{ijk}
-\frac{1}{NH}\sum_{(x_i,y_j;t_k)\in I_1\cap\tilde O^{(l)}}\phi\!\left(\frac{x_i-x}{h_x},\frac{y_j-y}{h_y};\frac{t_k-t}{h_t}\right)K_{ijk}\right|\\
&\qquad\le b_\phi\,\|K\|_\infty\,\frac{1}{NH}\sum_{(x_i,y_j;t_k)\in O^{(l)}\Delta\tilde O^{(l)}}1=o(1),
\end{aligned}
$$
where $O^{(l)}\Delta\tilde O^{(l)}=\big(O^{(l)}\setminus\tilde O^{(l)}\big)\cup\big(\tilde O^{(l)}\setminus O^{(l)}\big)$. The last equation above is a direct conclusion of (9). By the above results, we have
$$
\begin{aligned}
A_3^{(l)}(x,y;t)&=\frac{2}{NH}\sum_{(x_i,y_j;t_k)\in O^{(l)}}\Big[f(x_i,y_j;t_k)-\hat a^{(l)}-\hat b^{(l)}(x_i-x)-\hat c^{(l)}(y_j-y)-\hat d^{(l)}(t_k-t)\Big]^2K_{ijk}\\
&=\frac{2}{NH}\sum_{(x_i,y_j;t_k)\in O^{(l)}}\Big[f(x_i,y_j;t_k)-f(x_\tau,y_\tau;t_\tau)-d_\tau B_{0l}-\frac{d_\tau B_{1l}}{\xi_{200}}\,\frac{x_i-x}{h_x}-\frac{d_\tau B_{2l}}{\xi_{020}}\,\frac{y_j-y}{h_y}-\frac{d_\tau B_{3l}}{\xi_{002}}\,\frac{t_k-t}{h_t}\Big]^2K_{ijk}+o_p(1)\\
&=\frac{2}{NH}\left(\sum_{(x_i,y_j;t_k)\in I_1\cap O^{(l)}}+\sum_{(x_i,y_j;t_k)\in I_2\cap O^{(l)}}\right)\Big[f(x_i,y_j;t_k)-f(x_\tau,y_\tau;t_\tau)-d_\tau B_{0l}-\frac{d_\tau B_{1l}}{\xi_{200}}\,\frac{x_i-x}{h_x}-\frac{d_\tau B_{2l}}{\xi_{020}}\,\frac{y_j-y}{h_y}-\frac{d_\tau B_{3l}}{\xi_{002}}\,\frac{t_k-t}{h_t}\Big]^2K_{ijk}+o_p(1)\\
&=\frac{2}{NH}\left(\sum_{(x_i,y_j;t_k)\in I_1\cap\tilde O^{(l)}}+\sum_{(x_i,y_j;t_k)\in I_2\cap\tilde O^{(l)}}\right)\Big[f(x_i,y_j;t_k)-f(x_\tau,y_\tau;t_\tau)-d_\tau B_{0l}-\frac{d_\tau B_{1l}}{\xi_{200}}\,\frac{x_i-x}{h_x}-\frac{d_\tau B_{2l}}{\xi_{020}}\,\frac{y_j-y}{h_y}-\frac{d_\tau B_{3l}}{\xi_{002}}\,\frac{t_k-t}{h_t}\Big]^2K_{ijk}+o_p(1)\\
&=\frac{2}{NH}\sum_{(x_i,y_j;t_k)\in I_1\cap\tilde O^{(l)}}\Big[d_\tau B_{0l}+\frac{d_\tau B_{1l}}{\xi_{200}}\,\frac{x_i-x}{h_x}+\frac{d_\tau B_{2l}}{\xi_{020}}\,\frac{y_j-y}{h_y}+\frac{d_\tau B_{3l}}{\xi_{002}}\,\frac{t_k-t}{h_t}\Big]^2K_{ijk}\\
&\quad+\frac{2}{NH}\sum_{(x_i,y_j;t_k)\in I_2\cap\tilde O^{(l)}}\Big[d_\tau-d_\tau B_{0l}-\frac{d_\tau B_{1l}}{\xi_{200}}\,\frac{x_i-x}{h_x}-\frac{d_\tau B_{2l}}{\xi_{020}}\,\frac{y_j-y}{h_y}-\frac{d_\tau B_{3l}}{\xi_{002}}\,\frac{t_k-t}{h_t}\Big]^2K_{ijk}+o_p(1)\\
&=2d_\tau^2\int_{Q^{(1l)}}\Big[B_{0l}+\frac{B_{1l}}{\xi_{200}}u+\frac{B_{2l}}{\xi_{020}}v+\frac{B_{3l}}{\xi_{002}}w\Big]^2K(u,v)K(w)\,du\,dv\,dw\\
&\quad+2d_\tau^2\int_{Q^{(2l)}}\Big[1-B_{0l}-\frac{B_{1l}}{\xi_{200}}u-\frac{B_{2l}}{\xi_{020}}v-\frac{B_{3l}}{\xi_{002}}w\Big]^2K(u,v)K(w)\,du\,dv\,dw+o_p(1)
=d_\tau^2\big(C_\tau^{(l)}\big)^2+o_p(1),
\end{aligned}\tag{A8}
$$
where
$$
C_\tau^{(l)}=\left(2\int_{Q^{(1l)}}\Big[B_{0l}+\frac{B_{1l}}{\xi_{200}}u+\frac{B_{2l}}{\xi_{020}}v+\frac{B_{3l}}{\xi_{002}}w\Big]^2K(u,v)K(w)\,du\,dv\,dw
+2\int_{Q^{(2l)}}\Big[1-B_{0l}-\frac{B_{1l}}{\xi_{200}}u-\frac{B_{2l}}{\xi_{020}}v-\frac{B_{3l}}{\xi_{002}}w\Big]^2K(u,v)K(w)\,du\,dv\,dw\right)^{1/2}.
$$
Then, by Equations (A6)–(A8), we have
$$e^{(l)}(x,y;t)=\sigma^2+d_\tau^2\big(C_\tau^{(l)}\big)^2+o_p(1).$$
Similarly, we can prove that
$$e(x,y;t)=\sigma^2+d_\tau^2 C_\tau^2+o_p(1),$$
where
$$
C_\tau=\left(\int_{Q^{(1)}}\Big[\xi_{000}^{(2)}+\frac{\xi_{100}^{(2)}}{\xi_{200}}u+\frac{\xi_{010}^{(2)}}{\xi_{020}}v+\frac{\xi_{001}^{(2)}}{\xi_{002}}w\Big]^2K(u,v)K(w)\,du\,dv\,dw
+\int_{Q^{(2)}}\Big[1-\xi_{000}^{(2)}-\frac{\xi_{100}^{(2)}}{\xi_{200}}u-\frac{\xi_{010}^{(2)}}{\xi_{020}}v-\frac{\xi_{001}^{(2)}}{\xi_{002}}w\Big]^2K(u,v)K(w)\,du\,dv\,dw\right)^{1/2}.
$$
The main difference between this case and the previous one is in the derivation of the result (A8). For $e(x,y;t)$, the corresponding result is
$$
\begin{aligned}
A_3(x,y;t)&=\frac{1}{NH}\sum_{(x_i,y_j;t_k)\in O(x,y;t)}\Big[f(x_i,y_j;t_k)-\hat a(x,y;t)-\hat b(x,y;t)(x_i-x)-\hat c(x,y;t)(y_j-y)-\hat d(x,y;t)(t_k-t)\Big]^2K_{ijk}\\
&=\frac{1}{NH}\sum_{(x_i,y_j;t_k)\in O(x,y;t)}\Big[f(x_i,y_j;t_k)-f(x_\tau,y_\tau;t_\tau)-d_\tau\xi_{000}^{(2)}-\frac{d_\tau\xi_{100}^{(2)}}{\xi_{200}}\,\frac{x_i-x}{h_x}-\frac{d_\tau\xi_{010}^{(2)}}{\xi_{020}}\,\frac{y_j-y}{h_y}-\frac{d_\tau\xi_{001}^{(2)}}{\xi_{002}}\,\frac{t_k-t}{h_t}\Big]^2K_{ijk}+o_p(1)\\
&=\frac{1}{NH}\left(\sum_{(x_i,y_j;t_k)\in I_1}+\sum_{(x_i,y_j;t_k)\in I_2}\right)\Big[f(x_i,y_j;t_k)-f(x_\tau,y_\tau;t_\tau)-d_\tau\xi_{000}^{(2)}-\frac{d_\tau\xi_{100}^{(2)}}{\xi_{200}}\,\frac{x_i-x}{h_x}-\frac{d_\tau\xi_{010}^{(2)}}{\xi_{020}}\,\frac{y_j-y}{h_y}-\frac{d_\tau\xi_{001}^{(2)}}{\xi_{002}}\,\frac{t_k-t}{h_t}\Big]^2K_{ijk}+o_p(1)\\
&=\frac{1}{NH}\sum_{(x_i,y_j;t_k)\in I_1}\Big[d_\tau\xi_{000}^{(2)}+\frac{d_\tau\xi_{100}^{(2)}}{\xi_{200}}\,\frac{x_i-x}{h_x}+\frac{d_\tau\xi_{010}^{(2)}}{\xi_{020}}\,\frac{y_j-y}{h_y}+\frac{d_\tau\xi_{001}^{(2)}}{\xi_{002}}\,\frac{t_k-t}{h_t}\Big]^2K_{ijk}\\
&\quad+\frac{1}{NH}\sum_{(x_i,y_j;t_k)\in I_2}\Big[d_\tau-d_\tau\xi_{000}^{(2)}-\frac{d_\tau\xi_{100}^{(2)}}{\xi_{200}}\,\frac{x_i-x}{h_x}-\frac{d_\tau\xi_{010}^{(2)}}{\xi_{020}}\,\frac{y_j-y}{h_y}-\frac{d_\tau\xi_{001}^{(2)}}{\xi_{002}}\,\frac{t_k-t}{h_t}\Big]^2K_{ijk}+o_p(1)\\
&=d_\tau^2\int_{Q^{(1)}}\Big[\xi_{000}^{(2)}+\frac{\xi_{100}^{(2)}}{\xi_{200}}u+\frac{\xi_{010}^{(2)}}{\xi_{020}}v+\frac{\xi_{001}^{(2)}}{\xi_{002}}w\Big]^2K(u,v)K(w)\,du\,dv\,dw\\
&\quad+d_\tau^2\int_{Q^{(2)}}\Big[1-\xi_{000}^{(2)}-\frac{\xi_{100}^{(2)}}{\xi_{200}}u-\frac{\xi_{010}^{(2)}}{\xi_{020}}v-\frac{\xi_{001}^{(2)}}{\xi_{002}}w\Big]^2K(u,v)K(w)\,du\,dv\,dw+o_p(1)
=d_\tau^2 C_\tau^2+o_p(1).
\end{aligned}
$$

Appendix A.4. Proof of Theorem 3

For the case when $(x,y;t)\in\Omega_h\setminus J_h$, the estimator $\hat f(x,y;t)$ is one of $\hat a(x,y;t)$, $\hat a^{(1)}(x,y;t)$, $\hat a^{(2)}(x,y;t)$ and $(\hat a^{(1)}(x,y;t)+\hat a^{(2)}(x,y;t))/2$, all of which are consistent estimators of $f(x,y;t)$ in this case. So, the result in the theorem holds.
For the case when $(x,y;t)\in J_h\setminus S_h$, it is easy to see that either (i) $e(x,y;t)=\sigma^2+d_\tau^2 C_\tau^2+o_p(1)$, $e^{(1)}(x,y;t)=\sigma^2+o_p(1)$, and $e^{(2)}(x,y;t)=\sigma^2+d_\tau^2\big(C_\tau^{(2)}\big)^2+o_p(1)$; or (ii) $e(x,y;t)=\sigma^2+d_\tau^2 C_\tau^2+o_p(1)$, $e^{(1)}(x,y;t)=\sigma^2+d_\tau^2\big(C_\tau^{(1)}\big)^2+o_p(1)$, and $e^{(2)}(x,y;t)=\sigma^2+o_p(1)$. In both cases, we have $D(x,y;t)=d_\tau^2 C_\tau^2+o_p(1)$. Therefore, asymptotically $D(x,y;t)>u$. Since $e^{(1)}(x,y;t)<e^{(2)}(x,y;t)$ in case (i), the estimator $\hat f(x,y;t)$ is $\hat a^{(1)}(x,y;t)$ in this case, which is a consistent estimator of $f(x,y;t)$. A similar result follows in case (ii).
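To make the case analysis above concrete, here is a schematic Python sketch of the estimator-selection step that Theorems 2 and 3 analyze. The inputs would come from the local linear fits and residual measures defined earlier in the paper; the function name, argument layout, and the tie-handling that produces the averaged estimator are our own assumptions, not the paper's exact specification:

```python
def select_estimate(a, a1, a2, e, e1, e2, u, tie_tol=1e-8):
    """Pick the final estimate f_hat at a pixel from the whole-neighborhood
    fit `a` and the two one-sided fits `a1`, `a2`, using the residual
    measures e, e1, e2 and the threshold u, mirroring the rule behind
    Theorem 3."""
    D = e - min(e1, e2)
    if D <= u:
        # no evidence of a nearby edge: keep the whole-neighborhood fit
        return a
    if abs(e1 - e2) <= tie_tol:
        # the two one-sided fits are equally good: average them (assumed tie rule)
        return 0.5 * (a1 + a2)
    # otherwise use the one-sided fit from the side with the smaller residual
    return a1 if e1 < e2 else a2
```

Away from edges, $D\le u$ asymptotically and the full fit is kept; near an edge, $D>u$ and the one-sided fit from the edge-free side is selected, which is what makes the procedure consistent in both cases of the proof.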

References

  1. Zanter, K. Landsat 8 (L8) Data Users Handbook; Version 2; LSDS-1574; Department of the Interior, U.S. Geological Survey: Washington, DC, USA, 2016. Available online: https://landsat.usgs.gov/landsat-8-l8-data-users-handbook (accessed on 1 October 2020).
  2. Qiu, P. Jump regression, image processing and quality control (with discussions). Qual. Eng. 2018, 30, 137–153.
  3. Gonzalez, R.C.; Woods, R.E. Digital Image Processing, 4th ed.; Pearson: New York, NY, USA, 2018.
  4. Qiu, P. Jump surface estimation, edge detection, and image restoration. J. Am. Stat. Assoc. 2007, 102, 745–756.
  5. Geman, S.; Geman, D. Stochastic relaxation, Gibbs distributions and the Bayesian restoration of images. IEEE Trans. Pattern Anal. Mach. Intell. 1984, 6, 721–741.
  6. Besag, J. Spatial interaction and the statistical analysis of lattice systems (with discussions). J. R. Stat. Soc. Ser. B 1974, 36, 192–236.
  7. Fessler, J.A.; Erdogan, H.; Wu, W.B. Exact distribution of edge-preserving MAP estimators for linear signal models with Gaussian measurement noise. IEEE Trans. Image Process. 2000, 9, 1049–1055.
  8. Perona, P.; Malik, J. Scale space and edge detection using anisotropic diffusion. IEEE Trans. Pattern Anal. Mach. Intell. 1990, 12, 629–639.
  9. Weickert, J. Anisotropic Diffusion in Image Processing; Teubner: Stuttgart, Germany, 1998.
  10. Beck, A.; Teboulle, M. Fast gradient-based algorithms for constrained total variation image denoising and deblurring problems. IEEE Trans. Image Process. 2009, 18, 2419–2434.
  11. Rudin, L.; Osher, S.; Fatemi, E. Nonlinear total variation based noise removal algorithms. Phys. D 1992, 60, 259–268.
  12. Yuan, Q.; Zhang, L.; Shen, H. Hyperspectral image denoising employing a spectral–spatial adaptive total variation model. IEEE Trans. Geosci. Remote Sens. 2012, 50, 3660–3677.
  13. Chang, S.G.; Yu, B.; Vetterli, M. Spatially adaptive wavelet thresholding with context modeling for image denoising. IEEE Trans. Image Process. 2000, 9, 1522–1531.
  14. Mrázek, P.; Weickert, J.; Steidl, G. Correspondences between wavelet shrinkage and nonlinear diffusion. In Scale Space Methods in Computer Vision; Griffin, L.D., Lillholm, M., Eds.; Springer: Berlin/Heidelberg, Germany, 2003.
  15. Gijbels, I.; Lambert, A.; Qiu, P. Edge-preserving image denoising and estimation of discontinuous surfaces. IEEE Trans. Pattern Anal. Mach. Intell. 2006, 28, 1075–1087.
  16. Qiu, P. Discontinuous regression surfaces fitting. Ann. Stat. 1998, 26, 2218–2245.
  17. Qiu, P. Jump-preserving surface reconstruction from noisy data. Ann. Inst. Stat. Math. 2009, 61, 715–751.
  18. Qiu, P.; Mukherjee, P.S. Edge structure preserving 3-D image denoising by local surface approximation. IEEE Trans. Pattern Anal. Mach. Intell. 2012, 34, 1457–1468.
  19. Polzehl, J.; Spokoiny, V.G. Adaptive weights smoothing with applications to image restoration. J. R. Stat. Soc. Ser. B 2000, 62, 335–354.
  20. Kervrann, C.; Boulanger, J. Optimal spatial adaptation for patch-based image denoising. IEEE Trans. Image Process. 2006, 15, 2866–2878.
  21. Jain, P.; Tyagi, V. A survey of edge-preserving image denoising methods. Inf. Syst. Front. 2016, 18, 159–170.
  22. Qiu, P. Image Processing and Jump Regression Analysis; John Wiley & Sons: New York, NY, USA, 2005.
  23. Fan, J.; Gijbels, I. Local Polynomial Modelling and Its Applications; Chapman and Hall: New York, NY, USA, 1996.
  24. Altman, N.S. Kernel smoothing of data with correlated errors. J. Am. Stat. Assoc. 1990, 85, 749–759.
  25. Opsomer, J.; Wang, Y.; Yang, Y. Nonparametric regression with correlated errors. Stat. Sci. 2001, 16, 134–153.
  26. De Brabanter, K.; De Brabanter, J.; Suykens, J.A.K.; De Moor, B. Kernel regression in the presence of correlated errors. J. Mach. Learn. Res. 2011, 12, 1955–1976.
  27. Welvaert, M.; Durnez, J.; Moerkerke, B.; Verdoolaege, G.; Rosseel, Y. neuRosim: An R package for generating fMRI data. J. Stat. Softw. 2011, 44, 1–18.
  28. Hall, P.; Qiu, P. Blind deconvolution and deblurring in image analysis. Stat. Sin. 2007, 17, 1483–1509.
  29. Svoboda, D.; Ulman, V.; Kováč, P.; Šalingová, B.; Tesařová, L.; Koutná, I.K.; Matula, P. Vascular network formation in silico using the extended cellular Potts model. In Proceedings of the IEEE International Conference on Image Processing, 2016; pp. 3180–3183.
  30. Shuford, W.D.; Warnock, N.; Molina, K.C.; Sturm, K. The Salton Sea as critical habitat to migratory and resident waterbirds. Hydrobiologia 2002, 473, 255–274.
  31. Davydov, Y.A. Convergence of distributions generated by stationary stochastic processes. Theory Probab. Appl. 1968, 13, 691–696.
Figure 1. Two Landsat images of the Las Vegas area taken in 1984 (left panel) and 2007 (right panel).
Figure 2. The neighborhood O ( x , y ; t ) is divided into two parts by a plane that passes ( x , y ; t ) and is perpendicular to the estimated gradient direction G ^ ( x , y ; t ) .
Figure 3. (a) The true image intensity function f ( x , y ; t ) at t = 0.01 (left) and t = 0.25 (right). (b) The temporal profile f ( 0.25 , 0.25 ; t ) when t changes in [ 0 , 1 ] .
Figure 4. The first column shows the observed images at t = 0.5 when σ = 0.2 and ρ = 0.1 (1st row), 0.3 (2nd row), and 0.5 (3rd row). Second to sixth columns show the denoised images by LLK-C, LLK, GLQ, NEW-C and NEW, respectively.
Figure 5. The 1st, 50th and 100th cell images of the image sequence for describing a vasculogenesis process.
Figure 6. First column shows the 50th observed cell image after the spatio-temporally correlated noise with ρ = 0.1 (1st row), 0.3 (2nd row) or 0.5 (3rd row) being added. The second to sixth columns show the denoised images by LLK-C, LLK, GLQ, NEW-C and NEW, respectively.
Figure 7. The first image is the observed Landsat image of the Salton Sea region taken on 28 April 2001 after the spatio-temporally correlated noise with σ = 0.3 and ρ = 0.3 has been added. Second to sixth images are its denoised versions by LLK-C, LLK, GLQ, NEW-C, and NEW, respectively.
Table 1. In each entry, the MSE of $\hat f(x,y;t)$ in (6) is presented in the first line with its standard error (in parentheses); the corresponding values of $(h_{xy},h_t,u)$ chosen by the modified CV procedure (7) and (8) are presented in the second line; the optimal MSE is presented in the third line with its standard error (in parentheses); and the optimal values of $(h_{xy},h_t,u)$ are presented in the fourth line. MSE values in the table have been multiplied by $10^3$ and standard errors by $10^5$.
| σ | ρ | | n_t = 50, n_x = 64 | n_t = 50, n_x = 128 | n_t = 100, n_x = 64 | n_t = 100, n_x = 128 |
|---|---|---|---|---|---|---|
| 0.1 | 0.1 | CV MSE (SE) | 0.65 (0.80) | 0.30 (0.25) | 0.48 (0.43) | 0.26 (0.10) |
| | | CV (h_xy, h_t, u) | (0.03, 0.10, 0.05) | (0.03, 0.08, 0.025) | (0.03, 0.10, 0.05) | (0.02, 0.07, 0.05) |
| | | Optimal MSE (SE) | 0.32 (0.46) | 0.20 (0.14) | 0.37 (0.36) | 0.19 (0.08) |
| | | Optimal (h_xy, h_t, u) | (0.04, 0.07, 0.025) | (0.03, 0.05, 0.025) | (0.03, 0.08, 0.025) | (0.02, 0.05, 0.025) |
| | 0.3 | CV MSE (SE) | 0.60 (0.45) | 0.33 (0.16) | 0.59 (0.39) | 0.33 (0.15) |
| | | CV (h_xy, h_t, u) | (0.04, 0.10, 0.05) | (0.03, 0.07, 0.025) | (0.03, 0.10, 0.05) | (0.02, 0.07, 0.025) |
| | | Optimal MSE (SE) | 0.49 (0.35) | 0.30 (0.16) | 0.50 (0.37) | 0.29 (0.22) |
| | | Optimal (h_xy, h_t, u) | (0.04, 0.08, 0.025) | (0.03, 0.06, 0.025) | (0.03, 0.08, 0.025) | (0.03, 0.04, 0.025) |
| | 0.5 | CV MSE (SE) | 1.25 (1.24) | 0.80 (0.22) | 0.81 (0.55) | 0.64 (0.21) |
| | | CV (h_xy, h_t, u) | (0.03, 0.10, 0.05) | (0.02, 0.07, 0.025) | (0.03, 0.10, 0.05) | (0.02, 0.04, 0.025) |
| | | Optimal MSE (SE) | 0.77 (0.65) | 0.49 (0.24) | 0.74 (0.46) | 0.45 (0.25) |
| | | Optimal (h_xy, h_t, u) | (0.04, 0.09, 0.025) | (0.03, 0.06, 0.025) | (0.03, 0.09, 0.025) | (0.03, 0.04, 0.025) |
| 0.2 | 0.1 | CV MSE (SE) | 1.14 (1.13) | 0.68 (0.38) | 1.02 (0.74) | 0.56 (0.26) |
| | | CV (h_xy, h_t, u) | (0.04, 0.10, 0.025) | (0.03, 0.08, 0.025) | (0.04, 0.10, 0.025) | (0.03, 0.07, 0.025) |
| | | Optimal MSE (SE) | 1.11 (0.86) | 0.66 (0.33) | 0.93 (0.71) | 0.54 (0.31) |
| | | Optimal (h_xy, h_t, u) | (0.04, 0.09, 0.025) | (0.03, 0.07, 0.025) | (0.04, 0.08, 0.025) | (0.03, 0.05, 0.025) |
| | 0.3 | CV MSE (SE) | 1.69 (0.91) | 1.03 (0.54) | 1.32 (1.08) | 0.78 (0.41) |
| | | CV (h_xy, h_t, u) | (0.04, 0.10, 0.025) | (0.03, 0.08, 0.025) | (0.04, 0.10, 0.025) | (0.03, 0.07, 0.025) |
| | | Optimal MSE (SE) | 1.69 (1.24) | 1.03 (0.54) | 1.29 (1.12) | 0.78 (0.41) |
| | | Optimal (h_xy, h_t, u) | (0.04, 0.11, 0.025) | (0.03, 0.08, 0.025) | (0.04, 0.09, 0.025) | (0.03, 0.07, 0.025) |
| | 0.5 | CV MSE (SE) | 3.25 (1.74) | 2.88 (0.78) | 1.95 (1.85) | 2.61 (0.58) |
| | | CV (h_xy, h_t, u) | (0.04, 0.07, 0.025) | (0.02, 0.07, 0.025) | (0.04, 0.09, 0.025) | (0.02, 0.04, 0.025) |
| | | Optimal MSE (SE) | 2.59 (2.23) | 1.54 (1.32) | 1.91 (1.78) | 1.21 (0.43) |
| | | Optimal (h_xy, h_t, u) | (0.05, 0.10, 0.025) | (0.04, 0.09, 0.025) | (0.04, 0.11, 0.025) | (0.03, 0.08, 0.025) |
| 0.3 | 0.1 | CV MSE (SE) | 2.32 (1.91) | 1.26 (1.03) | 1.59 (0.81) | 0.92 (0.34) |
| | | CV (h_xy, h_t, u) | (0.05, 0.13, 0.025) | (0.04, 0.09, 0.025) | (0.04, 0.11, 0.025) | (0.03, 0.08, 0.025) |
| | | Optimal MSE (SE) | 2.28 (2.58) | 1.26 (1.03) | 1.59 (0.65) | 0.92 (0.34) |
| | | Optimal (h_xy, h_t, u) | (0.05, 0.11, 0.025) | (0.04, 0.09, 0.025) | (0.04, 0.10, 0.025) | (0.03, 0.08, 0.025) |
| | 0.3 | CV MSE (SE) | 3.15 (2.28) | 1.72 (1.37) | 2.26 (1.53) | 1.36 (0.50) |
| | | CV (h_xy, h_t, u) | (0.05, 0.13, 0.025) | (0.04, 0.09, 0.025) | (0.04, 0.11, 0.025) | (0.03, 0.08, 0.025) |
| | | Optimal MSE (SE) | 3.14 (2.45) | 1.71 (1.52) | 2.21 (1.31) | 1.33 (0.41) |
| | | Optimal (h_xy, h_t, u) | (0.05, 0.14, 0.025) | (0.04, 0.10, 0.025) | (0.04, 0.13, 0.025) | (0.04, 0.09, 0.025) |
| | 0.5 | CV MSE (SE) | 6.78 (3.46) | 6.81 (2.00) | 4.18 (2.72) | 6.33 (1.43) |
| | | CV (h_xy, h_t, u) | (0.04, 0.09, 0.05) | (0.02, 0.07, 0.05) | (0.04, 0.10, 0.025) | (0.02, 0.04, 0.05) |
| | | Optimal MSE (SE) | 4.46 (4.94) | 2.48 (2.38) | 3.18 (3.42) | 1.88 (0.56) |
| | | Optimal (h_xy, h_t, u) | (0.06, 0.16, 0.025) | (0.05, 0.11, 0.025) | (0.05, 0.14, 0.025) | (0.04, 0.10, 0.025) |
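The MSE and standard-error summaries reported in the tables can be reproduced mechanically from Monte Carlo replications. Below is a minimal sketch, not the authors' code: the helper names (`mse`, `summarize`) and the simulated data are hypothetical, and we assume only that each replication yields one MSE value and the summaries are averages scaled by 10^3 (standard errors by 10^5), as stated in the Table 1 caption.

```python
import numpy as np

def mse(f_hat, f_true):
    # Per-replication MSE of a denoised sequence against the true
    # intensity surface; both arrays have shape (n_t, n_x, n_x).
    return np.mean((f_hat - f_true) ** 2)

def summarize(rep_mses):
    # Average MSE over replications and its standard error,
    # scaled as in the tables (MSE x 10^3, SE x 10^5).
    m = np.asarray(rep_mses, dtype=float)
    avg = m.mean()
    se = m.std(ddof=1) / np.sqrt(len(m))
    return avg * 1e3, se * 1e5

# Hypothetical usage with simulated replications:
rng = np.random.default_rng(0)
f_true = rng.random((5, 16, 16))
reps = [mse(f_true + rng.normal(0, 0.01, f_true.shape), f_true)
        for _ in range(100)]
avg, se = summarize(reps)
```

In the paper's simulations, `f_hat` would be the denoised sequence from (6) under one noise realization, and the loop over replications would regenerate the spatio-temporally correlated noise each time.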
Table 2. In each entry, the first line is the MSE value with its standard error (in parentheses), and the second line is the EP value. MSE values in the table are in units of 10^−3 and standard errors in units of 10^−5.
| σ | ρ | | LLK-C | LLK | GLQ | NEW-C | NEW |
|---|---|---|---|---|---|---|---|
| 0.1 | 0.1 | MSE (SE) | 2.06 (0.08) | 2.10 (0.06) | 0.60 (0.18) | 0.24 (0.11) | 0.26 (0.10) |
| | | EP | 73.68% | 18.43% | 28.24% | 12.32% | 7.48% |
| | 0.3 | MSE (SE) | 3.04 (0.14) | 2.28 (0.09) | 0.95 (0.18) | 2.93 (0.40) | 0.33 (0.15) |
| | | EP | 124.48% | 34.40% | 43.69% | 131.28% | 10.58% |
| | 0.5 | MSE (SE) | 3.89 (0.24) | 3.23 (0.21) | 1.42 (0.42) | 3.77 (0.48) | 0.64 (0.21) |
| | | EP | 141.47% | 95.86% | 57.40% | 148.17% | 28.86% |
| 0.2 | 0.1 | MSE (SE) | 4.16 (0.25) | 2.93 (0.15) | 1.51 (0.38) | 0.86 (0.25) | 0.56 (0.26) |
| | | EP | 142.65% | 51.78% | 54.40% | 39.01% | 9.14% |
| | 0.3 | MSE (SE) | 9.39 (0.52) | 3.67 (0.25) | 2.87 (0.51) | 9.60 (0.78) | 0.78 (0.41) |
| | | EP | 291.31% | 82.84% | 94.59% | 295.72% | 15.08% |
| | 0.5 | MSE (SE) | 12.80 (0.94) | 11.21 (0.86) | 7.75 (1.32) | 13.12 (1.16) | 2.61 (0.58) |
| | | EP | 326.38% | 289.71% | 203.86% | 334.62% | 84.24% |
| 0.3 | 0.1 | MSE (SE) | 7.88 (0.57) | 3.94 (0.26) | 3.17 (0.86) | 1.01 (0.37) | 0.92 (0.34) |
| | | EP | 235.43% | 82.24% | 73.18% | 23.36% | 15.41% |
| | 0.3 | MSE (SE) | 19.97 (1.15) | 5.56 (0.50) | 12.36 (0.63) | 19.97 (1.16) | 1.36 (0.50) |
| | | EP | 461.12% | 133.33% | 261.31% | 461.13% | 25.78% |
| | 0.5 | MSE (SE) | 27.64 (2.09) | 23.75 (1.92) | 15.75 (1.71) | 28.04 (2.29) | 6.33 (1.43) |
| | | EP | 514.22% | 458.82% | 292.50% | 518.16% | 144.58% |
Table 3. Results for denoising a sequence of 100 cell images. In each entry, the first line is the MSE value with its standard error (in parentheses), and the second line is the EP value. MSE values in the table are in units of 10^−3 and standard errors in units of 10^−5.
| σ | ρ | | LLK-C | LLK | GLQ | NEW-C | NEW |
|---|---|---|---|---|---|---|---|
| 0.1 | 0.1 | MSE (SE) | 1.69 (0.11) | 0.97 (0.08) | 1.67 (0.12) | 1.69 (0.12) | 1.35 (0.12) |
| | | EP | 63.30% | 5.53% | 18.88% | 63.31% | 18.52% |
| | 0.3 | MSE (SE) | 2.36 (0.16) | 1.43 (0.14) | 1.94 (0.18) | 2.36 (0.16) | 1.51 (0.19) |
| | | EP | 77.54% | 31.64% | 25.72% | 77.55% | 7.28% |
| | 0.5 | MSE (SE) | 3.21 (0.25) | 2.82 (0.24) | 2.28 (0.29) | 3.21 (0.25) | 1.92 (0.31) |
| | | EP | 88.68% | 75.95% | 30.68% | 88.68% | 10.11% |
| 0.2 | 0.1 | MSE (SE) | 3.22 (17.00) | 1.47 (5.54) | 3.93 (0.29) | 3.22 (17.00) | 1.67 (0.25) |
| | | EP | 85.64% | 13.57% | 76.53% | 85.64% | 16.28% |
| | 0.3 | MSE (SE) | 8.71 (0.56) | 2.34 (0.35) | 5.00 (0.43) | 8.71 (0.56) | 2.17 (0.45) |
| | | EP | 189.74% | 42.07% | 91.44% | 189.75% | 4.88% |
| | 0.5 | MSE (SE) | 12.12 (0.94) | 10.35 (0.88) | 6.41 (0.86) | 12.14 (0.96) | 4.48 (0.90) |
| | | EP | 213.90% | 187.93% | 102.68% | 214.07% | 59.86% |
| 0.3 | 0.1 | MSE (SE) | 3.16 (0.50) | 2.01 (0.28) | 5.47 (0.53) | 3.16 (0.50) | 1.93 (0.40) |
| | | EP | 47.15% | 22.46% | 54.20% | 47.15% | 10.91% |
| | 0.3 | MSE (SE) | 19.30 (1.23) | 4.29 (0.71) | 10.11 (0.85) | 19.30 (1.23) | 2.82 (0.77) |
| | | EP | 308.32% | 79.75% | 161.91% | 308.32% | 14.37% |
| | 0.5 | MSE (SE) | 26.96 (2.09) | 22.88 (1.95) | 13.36 (1.82) | 27.00 (2.13) | 8.75 (1.85) |
| | | EP | 345.91% | 306.28% | 180.35% | 346.14% | 113.48% |
Yi, F.; Qiu, P. Edge-Preserving Denoising of Image Sequences. Entropy 2021, 23, 1332. https://doi.org/10.3390/e23101332
