
Sparse DDK: A Data-Driven Decorrelation Filter for GRACE Level-2 Products

1 School of Environment Science and Spatial Informatics, China University of Mining and Technology, Xuzhou 221116, China
2 Department of Geoscience and Remote Sensing, Delft University of Technology, 2628 CN Delft, The Netherlands
* Author to whom correspondence should be addressed.
Remote Sens. 2022, 14(12), 2810; https://doi.org/10.3390/rs14122810
Submission received: 13 April 2022 / Revised: 1 June 2022 / Accepted: 9 June 2022 / Published: 11 June 2022
(This article belongs to the Section Earth Observation Data)

Abstract:
High-frequency and correlated noise filtering is an important preprocessing step for GRACE level-2 products before mass anomalies are calculated. Decorrelation and denoising kernel (DDK) filters are usually considered optimal for this task. In this work, a sparse DDK filter is proposed. It is obtained by replacing the Tikhonov regularization in traditional DDK filters with weighted L1-norm regularization. The proposed sparse DDK filter adopts a time-varying error covariance matrix, while the equivalent signal covariance matrix is adaptively determined from the Gravity Recovery and Climate Experiment (GRACE) monthly solution itself. The covariance matrix of the sparse-DDK-filtered solution is also derived, from the Bayesian and error-propagation perspectives, respectively. Furthermore, we compare and discuss the properties of different filters. The proposed sparse DDK retains all the advantages of traditional filters, such as time variability, spatial inhomogeneity, and anisotropy. In addition, the filtered solution is sparse; that is, some high-degree and high-order terms are exactly zero. This sparsity is beneficial in the following sense: it means that the noise dominating the high-degree and high-order terms is completely suppressed, at the slight cost of also discarding the tiny signals in those terms. The Center for Space Research (CSR) GRACE monthly solutions and their error covariance matrices, from January 2004 to December 2010, are used to test the performance of the proposed sparse DDK filter. The results show that the sparse DDK can effectively decorrelate and denoise these data.


1. Introduction

Global climatic and environmental problems, such as sea-level rise, polar ice sheet and glacier melting, drought, and flood disasters, are closely related to the mass redistribution at the Earth’s surface [1,2]. The GRACE project has been successfully applied to retrieve the mass transport within/between different Earth systems, including the solid earth, cryosphere, ocean, terrestrial water, etc. However, there are always inevitable high-frequency and striping errors in unconstrained GRACE level-2 products. These are caused by an anisotropic mission sensitivity associated with the specific orbit and set-up of the mission. As a result, filtering of unconstrained GRACE level-2 solutions is a requisite step before calculating mass transport.
DDK filters represent a class of de-striping filters that can reduce both high-frequency and striping errors effectively. These filters are designed from a regularization/inversion perspective [3,4,5,6,7]. They are different from empirical decorrelation filters [8,9,10], which also find wide applications. There are also many other filters, such as [11,12,13,14,15,16,17,18], but these are out of the scope of this study since we mainly focus on DDK-like filters in this work. Usually, two types of input are necessary to design a DDK filter, namely the noise covariances and the signal covariances. Several different variants of DDK filters have been proposed in the literature, and they are distinguished by their specific selections of these two inputs.
A full error covariance matrix is vital for a filter to be able to de-stripe a solution. This is emphasized in [3], where the DDK filter is designed from the perspective of Wiener filtering. Note that the Wiener-filter perspective was also followed earlier in [19]. However, a diagonal error covariance matrix was employed there rather than a full one, which is exactly the reason why that filter could not destripe solutions efficiently [19]. At the time when the DDK filtering method was proposed, the full error covariance matrices were generally inaccessible for most users, and hence, the computation of an approximate full error covariance matrix from publicly available data was proposed in [5]. It is fortunate that, nowadays, the full error covariance matrices are becoming available from more and more processing centers [6].
The signal covariance matrices are often different for different variants of DDK filters, either in their structures or in the sources of input information. With regard to structures, a diagonal signal covariance matrix in the spectral domain was defined in [4,5,6], while a diagonal structure in the spatial domain was followed in [3]. The spectral-domain covariance matrix propagated from a diagonal spatial-domain covariance matrix is generally full. It is stated that neglecting the off-diagonal elements of such a full covariance matrix can result in visible differences in the filtered solutions compared to the case when the off-diagonal elements are preserved. However, we think this difference does not necessarily mean that using the full signal covariance matrix in the spatial domain results in a better performance. Concerning the sources of input information, the signal covariances can be obtained from prior geophysical models [4,5] or from the data themselves [3]. The latter approach can be called data-driven or data adaptive; it can be viewed as a variance and covariance component estimation. One of the main potential merits of data adaptive methods is that they cannot be misled by inadequate prior models. On the other hand, it is found in [6] that there is no significant difference between the results obtained, no matter whether the signal covariance matrix is from a prior model or from the data themselves.
In this work, we propose a new DDK scheme for filtering GRACE level-2 geopotential Spherical Harmonic (SH) coefficients. The proposed DDK filter assumes a diagonal structure of the signal covariance matrix in the spectral domain, which is similar to [4,5,6]. The proposed filter is also data adaptive, similar to [3]. The filter is obtained simply by replacing the L2 norm (of the SH coefficients vector) in Tikhonov regularization with the weighted L1 norm. A linear regression problem constrained with L1-norm regularization is nothing but the famous Lasso problem [20], widely known in the statistical community [21]. This is still a convex problem [22], and the convexity of the L1-norm is exactly the reason why it is widely used compared to other sparse regularizations. The applications cover many different fields, such as statistics, machine learning, and signal processing. It is pointed out that the proposed approach can be understood as a DDK filter with diagonal signal covariance matrices (in the spectral domain) automatically determined from the data. This interpretation follows from viewing the L1-norm as a diagonally weighted L2-norm, which will be introduced further in the next section. In addition, the proposed sparse DDK filter employs the monthly time-varying full error covariance matrices, which have been made available by more and more processing centers. This time-varying covariance information has been proved to be superior to a stationary one [6].
As a byproduct, the filtered solutions (namely, the geopotential SH coefficients) obtained with the proposed approach are sparse. This is the reason why we call our method the sparse DDK filter. A sparse vector is a vector with some of its elements being exactly zero. The reason why the solution can be sparse is briefly explained below. The data to be processed (filtered), namely the SH coefficients, are residual variables, which are obtained by subtracting the SH coefficients provided by processing centers from their corresponding reference values, e.g., a long-term mean or linear trend. Therefore, it is not unnatural that some of the residual SH coefficients are zeros or have small magnitudes. This means that vectors composed of these SH coefficients are compressible. Then, it is feasible to employ a sparse model (namely the sparse filtered SH solution) to fit such data. Furthermore, a sparse model is also beneficial in terms of model generalization, just because it is simple, and a simpler model is often better from the viewpoint of model selection or statistical learning [21,23]. Note that besides the filtered solution mentioned above, a DDK filter itself, namely the filtering matrix F0 defined in Equation (5) in the following section, is also sparse (or can be made sparse), with some coefficients equal to zero (in the spectral domain). This sparsity is due to the (approximate) independence between different orders/parities [4]. This (approximate) independence is exactly the starting point to design the empirical decorrelation filters [8], as well as the starting point to simplify the DDK filters [4].
The paper is organized as follows. After this introduction section, the methodology is presented in Section 2, including an introduction to the Lasso problem, the algorithm employed to iteratively compute the filtered geopotential SH coefficients, the method to select appropriate hyperparameters (regularization parameters), and a variance/covariance analysis. In Section 3, the relevant regularization strategies are discussed and compared, and the properties of the proposed sparse DDK filter are discussed, with comparisons to other filtering approaches. Data analysis is presented in Section 4, including signal and noise evaluation for both global and regional cases, followed by the conclusion and outlook in Section 5.

2. Materials and Methods

Let us denote the monthly residual geopotential SH coefficients (with a long-term mean removed) as a vector $x$. The corresponding time-varying full error covariance matrix, which is assumed to be available, is denoted as $Q$. The task is to design a DDK-type filter, denoted as $F_0$, and to compute the filtered solution as $\hat{x} = F_0 x$. No other data are needed. The other input needed to design a DDK filter, namely the signal covariance matrix (in the spectral domain here), denoted as $S$, is implicitly obtained from the available data, namely $x$ together with $Q$.

2.1. A Brief Introduction to the Lasso Problem

Assume there is a measurement model expressed as follows:
$y = B\beta + e$,  (1)
where $y$, $B$, $\beta$, and $e$ are the measurement vector, design matrix, parameter vector to be estimated, and noise vector, respectively. The covariance matrix of the measurement noise $e$ is denoted as $Q$. For the sparse L1-norm regularized case, the estimate is defined as follows:
$\hat{\beta} = \arg\min_{\hat{\beta}} \left[ (y - B\hat{\beta})^{T} Q^{-1} (y - B\hat{\beta}) + \mu \|\hat{\beta}\|_{1} \right]$,  (2)
where $\mu$ is a regularization parameter. The estimate $\hat{\beta}$ is nothing but the Lasso solution [20], widely known in the statistical community [21]. Prior constraints on the solution are imposed by including the regularization term $\mu\|\hat{\beta}\|_1$, similarly to the conventional DDK filters [5]. However, the L1 norm is employed in this regularization, unlike the L2 norm of the Tikhonov regularization used in conventional DDK filters. The fitting error term $(y - B\hat{\beta})^{T} Q^{-1} (y - B\hat{\beta})$ implies a Gaussian distribution of the measurement errors. The Lasso solution can be viewed as a Bayesian solution with a Laplace prior, rather than the Gaussian prior implied in conventional DDK filters [24]. There are correlations among the GRACE SH coefficients, making the signal non-Gaussian, so we adopt a Laplace prior to constrain the magnitudes of the parameters rather than their energy, as a Gaussian prior would imply. It should be noted that the regularization term $\mu\|\hat{\beta}\|_1$ may appear in the form $\mu\|W\hat{\beta}\|_1$ (with $W$ being a regularization matrix) in some textbooks [25]; a problem involving $\mu\|W\hat{\beta}\|_1$ is usually called a weighted Lasso. As we will show in the following subsection, a weighted Lasso problem can be converted into a standard one as presented in Equation (2). For simplicity, we only show the standard Lasso in this subsection.
The Lasso solution in Equation (2) is calculated with an efficient algorithm called FISTA [26], which is briefly introduced in the following subsections. The regularization parameter $\mu$ provides a tradeoff between the fitting term $(y - B\hat{\beta})^{T} Q^{-1} (y - B\hat{\beta})$ (needed to extract signal) and the smoothing term $\|\hat{\beta}\|_1$ (needed to suppress noise in the measurement vector $y$). An appropriate value for $\mu$ is selected objectively according to a tailored variant of Generalized Cross-Validation (GCV), also presented in the following subsections. The solution is sparse, so that some elements of $\hat{\beta}$ are exactly zero; this means that the Lasso method also plays the role of model coefficient selection, in an automatic way. By tuning the regularization parameter, the L1-norm regularization can produce solutions with different degrees of smoothing/denoising and then perform model coefficient selection automatically (according to GCV), namely balance fitting and smoothing appropriately. This property facilitates an appropriate separation of signal and noise in DDK filtering, that is, an automatic extraction of the real mass anomaly signal from noisy monthly solutions.
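For intuition, the following minimal sketch (ours, not from the paper) shows the special case $B = I$, $Q = I$, where the Lasso objective of Equation (2) decouples element-wise and the solution is plain soft thresholding; small entries are set exactly to zero, which is the sparsity mechanism exploited throughout this work.

```python
import numpy as np

def soft_threshold(v, alpha):
    """Element-wise soft thresholding: shrink toward zero by alpha, clip at zero."""
    return np.sign(v) * np.maximum(np.abs(v) - alpha, 0.0)

# For B = I and Q = I the Lasso objective decouples per element:
#   min_b (y_j - b)^2 + mu * |b|   =>   b = soft_threshold(y_j, mu / 2)
y = np.array([3.0, -0.4, 0.1, -2.5])
mu = 1.0
beta_hat = soft_threshold(y, mu / 2.0)  # [2.5, 0.0, 0.0, -2.0]
```

The two small entries of $y$ fall below the threshold $\mu/2$ and are zeroed exactly, while the large entries are only shrunk.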

2.2. A Sparse DDK Filter with Weighted Lasso

The sparse DDK filter is designed based on a weighted Lasso, a special variant of the so-called adaptive Lasso in [27]. In order to approximate the attenuation of the gravity field signal with increasing degree, we adopt a power law to construct a weighting matrix, namely $\bar{W} = \bar{R}^{-1} = \mathrm{diag}[\,l^{p}\,]$, with $l$ being the SH degree and the power index $p$ taken as four empirically [5]. This weighting matrix is introduced to better approximate the mass anomaly signal (in the spectral domain), so the sparse DDK filtering solution with weighted Lasso is a tailored version of Equation (2):
$\hat{x}_0 = \arg\min_{\hat{x}} \left[ (x - \hat{x})^{T} Q^{-1} (x - \hat{x}) + \mu \|\bar{W}\hat{x}\|_{1} \right]$,  (3)
where the design matrix $B$ of Equation (2) is an identity matrix here. The weighted Lasso solution $\hat{x}_0$ is exactly the filtered solution of our sparse DDK filter. We can calculate this solution by solving the following standard Lasso problem:
$\hat{x}_0 = \bar{W}^{-1}\hat{z}_1 = \bar{R}\left\{ \arg\min_{\hat{z}} \left[ (x - \bar{R}\hat{z})^{T} Q^{-1} (x - \bar{R}\hat{z}) + \mu \|\hat{z}\|_{1} \right] \right\}$.  (4)
The standard Lasso problem involved in the weighted Lasso solution of Equation (4) can be abstracted as Equation (2): the design matrix $B$, the measurement vector $y$, and the parameter vector $\beta$ in Equation (2) correspond, respectively, to the weighting matrix $\bar{R}$, the unfiltered SH coefficient vector $x$, and the transformed coefficient vector $\hat{z} = \bar{W}\hat{x}$ in Equation (4). To obtain the sparse DDK filtered SH solution $\hat{x}_0$ of Equation (4), we first solve a standard Lasso problem for the transformed variable $\hat{z}_1 = \bar{W}\hat{x}_0$, and then transform the solution back to the original variable, namely $\hat{x}_0 = \bar{R}\hat{z}_1$. The FISTA algorithm for the standard Lasso solution $\hat{z}_1$, and the tailored GCV criterion for the selection of the hyperparameter $\mu$, are introduced in Section 2.3.
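The change of variables above can be sketched and checked numerically. In the snippet below (ours; the degree-major ordering with $2l+1$ orders per degree and the start at degree 1 are illustrative assumptions), we build the power-law weights $\bar{W} = \mathrm{diag}[\,l^4\,]$ and verify that the weighted-Lasso objective of Equation (3) evaluated at a candidate $\hat{x}$ equals the standard-Lasso objective of Equation (4) evaluated at $\hat{z} = \bar{W}\hat{x}$, so the two problems share minimizers via $\hat{x}_0 = \bar{R}\hat{z}_1$.

```python
import numpy as np

# Power-law weighting matrix for SH degrees l = 1..L_max (illustrative ordering).
L_max, p = 3, 4
degrees = np.concatenate([np.full(2 * l + 1, l) for l in range(1, L_max + 1)])
W_bar = np.diag(degrees.astype(float) ** p)      # penalty weights grow as l^4
R_bar = np.diag(1.0 / np.diag(W_bar))            # R_bar = W_bar^{-1}
n = len(degrees)                                 # 15 coefficients for L_max = 3

rng = np.random.default_rng(0)
A = rng.standard_normal((n, n))
Q = A @ A.T + n * np.eye(n)                      # stand-in SPD error covariance
x = rng.standard_normal(n)                       # "unfiltered" coefficients
x_hat = rng.standard_normal(n)                   # any candidate solution
mu = 0.7

def weighted_obj(xh):
    r = x - xh
    return r @ np.linalg.solve(Q, r) + mu * np.abs(W_bar @ xh).sum()

def standard_obj(zh):
    r = x - R_bar @ zh
    return r @ np.linalg.solve(Q, r) + mu * np.abs(zh).sum()

# Objectives agree at corresponding points, so the minimizers correspond too.
assert np.isclose(weighted_obj(x_hat), standard_obj(W_bar @ x_hat))
```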
After obtaining the sparse DDK filtered solution with Equation (4), it is also meaningful to construct an equivalent DDK filtering matrix (or an equivalent filter) from this solution, so as to analyze the properties of the sparse DDK filter. This is realized by viewing the L1-norm regularized solution of the sparse DDK filter as a weighted L2-norm regularized solution. To be specific, let $W_0^{-1} = R_0 = \mathrm{diag}[\,|\hat{z}_1|\,]$; then $\|\hat{z}_1\|_1 = \hat{z}_1^{T} W_0 \hat{z}_1$. This can be proved as follows: $\|\hat{z}_1\|_1$ is the sum of the absolute values of all elements of the vector $\hat{z}_1$. In $\hat{z}_1^{T} W_0 \hat{z}_1$, the row vector $\hat{z}_1^{T} W_0$ has all elements equal to $\pm 1$, with the sign of each element the same as that of the corresponding element of $\hat{z}_1$ (because $W_0^{-1} = \mathrm{diag}[\,|\hat{z}_1|\,]$). Hence, $\hat{z}_1^{T} W_0 \hat{z}_1$ is also the sum of the absolute values of all elements of $\hat{z}_1$. This completes the proof. The Lasso problem given by Equation (4) can thus be viewed as a weighted L2-norm regularization, and accordingly, the equivalent DDK filtered solution of the proposed sparse DDK can be represented as follows:
$\hat{x}_0 = \bar{R}\left\{ \arg\min_{\hat{z}} \left[ (x - \bar{R}\hat{z})^{T} Q^{-1} (x - \bar{R}\hat{z}) + \mu \hat{z}^{T} W_0 \hat{z} \right] \right\} = \bar{R}\left( \bar{R} Q^{-1} \bar{R} + \mu W_0 \right)^{-1} \bar{R} Q^{-1} x = F_0 x$.  (5)
where the equivalent DDK filtering matrix (in the spectral domain) of the sparse DDK filter is defined accordingly as follows:
$F_0 = \bar{R}\left( \bar{R} Q^{-1} \bar{R} + \mu W_0 \right)^{-1} \bar{R} Q^{-1} = I - \mu Q \left( \mu Q + \bar{R} R_0 \bar{R} \right)^{-1}$.  (6)
This equivalent DDK filtering matrix has the same form as one constructed with Tikhonov regularization, such as a conventional DDK filtering matrix. Using this equivalent filtering matrix, we can also obtain the equivalent sparse DDK filtered solution $\hat{x}_E = F_0 x$. Note that this equivalent solution is, strictly speaking, non-sparse: the elements corresponding to the zero elements of the original sparse solution $\hat{x}_0$ are very small (close to zero), but not exactly zero. Therefore, in terms of spatial resolution, this equivalent filtered solution does not present performance degradation.
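The equality of the two expressions for $F_0$ in Equation (6) can be checked numerically; the sketch below (dimensions, covariances, and the diagonal weights are illustrative stand-ins, not GRACE data) verifies the identity for a random SPD error covariance and strictly nonzero $|\hat{z}_1|$ entries.

```python
import numpy as np

rng = np.random.default_rng(1)
n = 5
A = rng.standard_normal((n, n))
Q = A @ A.T + n * np.eye(n)                        # SPD error covariance (illustrative)
R_bar = np.diag(1.0 / np.arange(1, n + 1) ** 4.0)  # power-law R_bar sketch
z = rng.uniform(0.5, 2.0, n)                       # strictly nonzero |z_1| entries
R0 = np.diag(z)                                    # R0 = diag[|z_1|]
W0 = np.diag(1.0 / z)                              # W0 = R0^{-1}
mu = 0.3

Qinv = np.linalg.inv(Q)
F0_a = R_bar @ np.linalg.inv(R_bar @ Qinv @ R_bar + mu * W0) @ R_bar @ Qinv
F0_b = np.eye(n) - mu * Q @ np.linalg.inv(mu * Q + R_bar @ R0 @ R_bar)
assert np.allclose(F0_a, F0_b)   # the two forms of Equation (6) coincide
```

The second form is numerically convenient when some $|\hat{z}_1|$ entries are zero, since it uses $R_0$ rather than its inverse $W_0$.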
Next, we also develop the signal covariance matrix of the sparse DDK filtered solution $\hat{x}_0$. Assuming $\hat{\sigma}_0^2$ is a posterior estimate of the variance component scaling the error covariance matrix $Q$ (how to calculate $\hat{\sigma}_0^2$ is detailed in Section 2.4), Equation (6) can be written as:
$F_0 = \bar{R}\left( \bar{R}(\hat{\sigma}_0^2 Q)^{-1}\bar{R} + \mu W_0 \right)^{-1} \bar{R}\,(\hat{\sigma}_0^2 Q)^{-1} = \left( (\hat{\sigma}_0^2 Q)^{-1} + \mu \bar{R}^{-1} W_0 \bar{R}^{-1} \right)^{-1} (\hat{\sigma}_0^2 Q)^{-1} = \left( Q^{-1} + \mu \hat{\sigma}_0^2\, \bar{R}^{-1} W_0 \bar{R}^{-1} \right)^{-1} Q^{-1} = \left( Q^{-1} + S^{-1} \right)^{-1} Q^{-1}.  (7)$
Therefore, the corresponding posterior signal covariance matrix after equivalent DDK filtering is readily available as follows:
$S = \left( \mu \hat{\sigma}_0^2\, \bar{R}^{-1} W_0 \bar{R}^{-1} \right)^{-1} = \left( \mu \hat{\sigma}_0^2 \right)^{-1} \bar{R} R_0 \bar{R}$.  (8)
Clearly, this posterior signal covariance matrix is an adjusted version of the prior one, $R_0$. Let us review the proposed sparse DDK filter. The sparse filtered solution is calculated based on Equation (4), using the numerical algorithm developed in the following subsection. In order to better compare the conventional DDK filters and the sparse DDK filter, an equivalent DDK filtering matrix $F_0$ of the latter, Equation (6), is derived from the obtained filtered solution, as well as its equivalent DDK signal covariance matrix, Equation (8).

2.3. Computation Algorithm

In general, no analytical solution is available for a standard Lasso problem in Equation (2), due to the nonlinearity introduced by the L1-norm regularization. Many algorithms have been proposed to solve this kind of problem numerically. In this work, a highly efficient method, called FISTA, is employed [26]. This algorithm has also been used in our previous studies [28,29,30]. It is a proximal gradient algorithm, which is further accelerated by the Nesterov momentum method [31]. In FISTA, we conduct the following evaluations iteratively until convergence:
$\begin{aligned} \beta_k &= T_{\lambda\mu}\!\left( \vartheta_k - 2\lambda B^{T} Q^{-1} B\, \vartheta_k + 2\lambda B^{T} Q^{-1} y \right) \\ s_{k+1} &= \tfrac{1}{2}\left( 1 + \sqrt{1 + 4 s_k^2} \right) \\ \vartheta_{k+1} &= \beta_k + \tfrac{s_k - 1}{s_{k+1}} \left( \beta_k - \beta_{k-1} \right) \end{aligned}$  (9)
The iterations are initialized as $\beta_0 = \vartheta_1 = 0$ and $s_1 = 1$, and are terminated when $\|\beta_k - \beta_{k-1}\|_2 / \|\beta_{k-1}\|_2 \le \delta$, with the threshold $\delta = 10^{-5}$ predefined empirically. The soft-thresholding operator $T_\alpha(x)$ is element-wise; namely, the $j$th element reads $[T_\alpha(x)]_j = x_j - \alpha$ if $x_j > \alpha$; $x_j + \alpha$ if $x_j < -\alpha$; and $0$ otherwise. The step-size coefficient $\lambda$ is chosen as $\lambda = 1 / \left( 2\lambda_{\max}(B^{T} Q^{-1} B) \right)$, where $\lambda_{\max}(\cdot)$ denotes the maximum eigenvalue. The value $\beta_k$ of the last iteration is taken as the final estimate, namely $\hat{\beta} = \beta_k$. Note that in Equation (9), neither $2\lambda B^{T} Q^{-1} B$ nor $2\lambda B^{T} Q^{-1} y$ changes across iterations, so they need to be calculated only once and stored.
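The iteration of Equation (9) can be sketched as follows. This is an illustrative re-implementation in Python (variable names and the small sanity-check problem are ours, not from the paper); for $B = Q = I$ the minimizer is known in closed form, which makes a convenient check.

```python
import numpy as np

def soft_threshold(v, alpha):
    """Element-wise soft-thresholding operator T_alpha."""
    return np.sign(v) * np.maximum(np.abs(v) - alpha, 0.0)

def fista_lasso(B, Q, y, mu, delta=1e-5, max_iter=5000):
    """FISTA for min_b (y - B b)^T Q^{-1} (y - B b) + mu * ||b||_1, as in Eq. (9)."""
    Qinv = np.linalg.inv(Q)
    M = B.T @ Qinv @ B                       # constant across iterations
    g = 2.0 * B.T @ Qinv @ y                 # constant across iterations
    lam = 1.0 / (2.0 * np.linalg.eigvalsh(M).max())  # 1 / (2 lambda_max(B^T Q^-1 B))
    beta_prev = np.zeros(B.shape[1])
    theta, s = beta_prev.copy(), 1.0
    for _ in range(max_iter):
        # Gradient step on the quadratic term, then soft thresholding:
        beta = soft_threshold(theta - lam * (2.0 * M @ theta - g), lam * mu)
        s_next = (1.0 + np.sqrt(1.0 + 4.0 * s * s)) / 2.0
        theta = beta + (s - 1.0) / s_next * (beta - beta_prev)  # Nesterov momentum
        if np.linalg.norm(beta - beta_prev) <= delta * np.linalg.norm(beta_prev):
            return beta
        beta_prev, s = beta, s_next
    return beta_prev

# Sanity check: for B = Q = I the minimizer is the soft threshold of y at mu/2.
beta = fista_lasso(np.eye(4), np.eye(4), np.array([3.0, -0.4, 0.1, -2.5]), 1.0)
```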
We follow a trial-and-error strategy to select an appropriate hyperparameter, namely the regularization parameter μ in Equation (2). For each trial value of μ, we calculate the following GCV value:
$\mathrm{GCV}(\mu) = \dfrac{m\,\mathrm{RSS}(\mu)}{(m - p)^2}$.  (10)
In the above, the Residual Sum of Squares (RSS) term is calculated as $\mathrm{RSS}(\mu) = [y - B\hat{\beta}(\mu)]^{T} Q^{-1} [y - B\hat{\beta}(\mu)]$. The length of the measurement vector $y$ is denoted as $m$, and the number of nonzero elements in the solution is denoted as $p$. We deliberately write $\mu$ as an argument of the corresponding variables above in order to show their dependence on $\mu$ explicitly. The chosen $\mu$ is the one that minimizes the GCV, namely $\hat{\mu} = \arg\min_{\mu} \mathrm{GCV}(\mu)$. Equation (10) is the well-known GCV [32,33]; however, it is tailored here to the Lasso problem. This tailoring is permitted by the following property of the Lasso problem revealed in [34]: the number of nonzero parameters in the Lasso solution is an unbiased and asymptotically consistent estimate of the effective number of free parameters of the Lasso estimation problem.
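The trial-and-error scan of Equation (10) can be sketched on a toy problem (ours, not GRACE data). With $B = Q = I$ the Lasso solution is the closed-form soft threshold, so GCV can be evaluated cheaply over a grid of trial $\mu$ values, using the number of nonzero elements as the effective number of parameters $p$.

```python
import numpy as np

def soft_threshold(v, alpha):
    return np.sign(v) * np.maximum(np.abs(v) - alpha, 0.0)

rng = np.random.default_rng(2)
m = 200
signal = np.zeros(m)
signal[:20] = rng.normal(0.0, 3.0, 20)       # a few large "low-degree" terms
y = signal + rng.normal(0.0, 0.5, m)         # noisy measurements

best = (np.inf, None)
for mu in np.logspace(-2, 2, 60):            # trial grid of regularization parameters
    beta = soft_threshold(y, mu / 2.0)       # Lasso solution for B = Q = I
    rss = np.sum((y - beta) ** 2)            # RSS(mu) with Q = I
    p = np.count_nonzero(beta)               # effective number of parameters
    if p < m:
        gcv = m * rss / (m - p) ** 2         # Equation (10)
        if gcv < best[0]:
            best = (gcv, mu)
print("selected mu:", best[1])
```

The selected $\mu$ balances fitting and smoothing: too small a $\mu$ keeps almost every noisy coefficient, too large a $\mu$ zeroes the signal, and GCV penalizes both extremes.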

2.4. Variance and Covariance Analysis

The error covariance matrix $Q$ is often inaccurate, typically overestimating the data accuracy rather than underestimating it. We can simply scale the covariance matrix with a scalar value to account for this problem. This scalar is often called a variance component, denoted as $\sigma^2$. Therefore, ideally, we should replace $Q$ with $\sigma^2 Q$. However, the introduction of this variance component does not affect the solution presented above, because it is absorbed into the tunable regularization parameter. Taking the equivalent DDK filtering matrix $F_0$ in Equation (6) as an example, the error covariance matrix $Q$ therein can be flexibly scaled through the regularization parameter (without using a variance component). However, in some accuracy assessments, this variance component is needed. The variance component and the regularization parameter can be estimated simultaneously with variance component estimation approaches [35,36]. However, we estimate the regularization parameter and the variance component separately. This is rational because the variance component does not affect the estimation of the SH parameters. After an appropriate regularization parameter is selected, as discussed above, we treat it as known; we then provide two posterior estimates of the variance component. The variance component is estimated for the model shown in Equation (3).
The first estimate is as follows:
$\hat{\sigma}_1^2 = \dfrac{\mathrm{RSS} + \mathrm{Reg}}{m + n - n} = \dfrac{\mathrm{RSS} + \mu \|\bar{W}\hat{x}\|_1}{m}$,  (11)
where Reg represents the regularization term and $n$ is the length of the filtered solution $\hat{x}$. This estimate is based on viewing the constraints implied by the regularization term as pseudo measurements. Therefore, besides the $m$ measurements, we also have $n$ pseudo measurements. We can then view the regularized solution as a least-squares solution using both measurements and pseudo measurements, and the above general formula is nothing but the standard posterior estimator of a single variance component in a least-squares adjustment, with redundancy $(m + n) - n = m$. When applying this general formula to our specific DDK filtering, $m = n$.
The second estimate is as follows:
$\hat{\sigma}_2^2 = \dfrac{\mathrm{RSS}}{m - p}$,  (12)
where $p$ represents the number of nonzero elements in the filtered solution $\hat{x}$. This estimate follows from the findings of [34] already introduced above. We have not found a theoretical relation between the two estimates; we compare them in the numerical studies presented in the following.
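The two estimates of Equations (11) and (12) are direct to compute once the filtered solution is available; a sketch (ours; it assumes the DDK case $B = I$, so $m = n$):

```python
import numpy as np

def variance_components(x, x_hat, Q, mu, W_bar):
    """Posterior variance-component estimates of Equations (11) and (12)
    for the DDK case B = I (so m = n = len(x))."""
    m = len(x)
    r = x - x_hat
    rss = r @ np.linalg.solve(Q, r)          # RSS = r^T Q^{-1} r
    reg = mu * np.abs(W_bar @ x_hat).sum()   # Reg = mu * ||W_bar x_hat||_1
    p = np.count_nonzero(x_hat)              # effective number of parameters
    sigma1 = (rss + reg) / m                 # pseudo-measurement view, Eq. (11)
    sigma2 = rss / (m - p)                   # effective-parameter view, Eq. (12)
    return sigma1, sigma2

# Tiny worked example (illustrative numbers):
s1, s2 = variance_components(np.array([1.0, 2.0]), np.array([0.5, 0.0]),
                             np.eye(2), 1.0, np.eye(2))
```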
In addition to the above variance component, the error covariance matrix of the filtered variables may also be needed in some situations. Two approaches can be followed to provide such a covariance matrix. The first is obtained from a Bayesian viewpoint, namely viewing the filtered solution as the maximum a posteriori probability (MAP) estimate. We rearrange the original parameter vector as $x = [x_A^T \; x_B^T]^T$, with the upper part being nonzero and the lower part being zero in the solution. Accordingly, we have $\hat{x} = [\hat{x}_A^T \; \hat{x}_B^T]^T = [\hat{x}_A^T \; 0^T]^T$. The uncertainty of the solution is as follows:
$\begin{cases} \mathrm{Prob}(x_B = 0) = \mathrm{Prob}(x_B = \hat{x}_B) = 1 - \mathrm{Prob}(x_B \neq 0) = 1, \\ x_A \sim N\!\left( \hat{x}_A,\; \hat{\sigma}^2 \bar{R}\, Q_{\hat{z}_A \hat{z}_A} \bar{R}^{T} \right), \end{cases}$  (13)
where $Q_{\hat{z}\hat{z}} = (\bar{R}^{T} Q^{-1} \bar{R})^{-1} = \begin{bmatrix} Q_{\hat{z}_A\hat{z}_A} & Q_{\hat{z}_A\hat{z}_B} \\ Q_{\hat{z}_B\hat{z}_A} & Q_{\hat{z}_B\hat{z}_B} \end{bmatrix}$ denotes the covariance matrix of the transformed solution $\hat{z} = [\hat{z}_A^T \; \hat{z}_B^T]^T = [\hat{z}_A^T \; 0^T]^T$, and $N(\cdot)$ denotes a normal (Gaussian) distribution. This is correct up to the second-order terms in the expression for the logarithm of the posterior density. The reader can find a detailed derivation in Supplementary Text S1 in the Supplementary Material file.
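The block bookkeeping behind Equation (13) can be sketched as follows (all matrices and the sparse solution are illustrative stand-ins): partition by the support of the filtered solution, extract the $AA$ block of the transformed covariance, and map it back with $\bar{R}$.

```python
import numpy as np

rng = np.random.default_rng(4)
n = 6
A = rng.standard_normal((n, n))
Q = A @ A.T + n * np.eye(n)                          # SPD error covariance (illustrative)
R_bar = np.diag(1.0 / np.arange(1, n + 1) ** 4.0)    # power-law R_bar sketch
x_hat = np.array([1.2, -0.8, 0.5, 0.0, 0.0, 0.0])    # sparse filtered solution
idx_A = np.flatnonzero(x_hat)                        # nonzero support (part A)

# Covariance of the transformed solution, then its AA block:
Qzz = np.linalg.inv(R_bar.T @ np.linalg.solve(Q, R_bar))
Qzz_AA = Qzz[np.ix_(idx_A, idx_A)]

sigma2 = 1.0                                         # a posterior variance component
RA = R_bar[np.ix_(idx_A, idx_A)]                     # R_bar restricted to part A
cov_xA = sigma2 * RA @ Qzz_AA @ RA.T                 # covariance of the nonzero part
```

The zero part $x_B$ carries no Gaussian uncertainty in this view; only the nonzero block $x_A$ receives a covariance.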
In the second approach, the covariance matrix is obtained simply according to the error propagation law (EPL):
$Q_{\hat{x}\hat{x}} = \hat{\sigma}^2 F_0 Q F_0^{T}$.  (14)
This kind of covariance, which accounts for formal errors only, tends to overestimate the accuracy. One of the reasons is that only the commission error is propagated, while the omission error is neglected. When assessing the accuracy of a functional calculated from a filtered SH model, the covariance of Equation (13) is preferable. If the filtered SH model is treated as a prior in another SH modeling problem, the covariance of Equation (14) better represents the data-only statistical information.

3. Properties of the Proposed Sparse DDK Filter

3.1. A Discussion on Relevant Regularization Strategies

The regularizations mentioned in this work can be classified into three groups, namely the L0-norm, the L1-norm, and the L2-norm regularizations. Their relations are briefly discussed here. Figure 1 intuitively summarizes the discussion.
The L2-norm regularization: The signal covariance in the DDK filter in [5] can be viewed as an application of Kaula’s rule of thumb in its generalized version. The degree variance of the signal follows a power-law model, but with a power index empirically chosen to fit a prior geophysical model. Kaula’s rule of thumb is viewed as having a physical background by many. This is true only when we consider Kaula’s rule of thumb in its generalized version. The specific power index in the generalized Kaula’s rule of thumb can depend on the time, the reference model, and other factors. If this power index is not tailored, the results can be inferior even to naïve Tikhonov regularization, namely with the identity matrix taken as the regularization matrix [37,38].
The L0-norm regularization: Simply by setting the regularization parameter equal to two in the Best Subset Selection (BSS) problem, we can obtain the Akaike information criterion (AIC) [40]. For this, just recall that the L0-norm (corresponding to BSS) of a vector is exactly the number of nonzero elements of this vector, and the nonzero elements exactly define the selected model (corresponding to AIC). Therefore, we can say that model selection by AIC can be viewed as a special case of the BSS problem, or that BSS can be viewed as a generalization of model selection with AIC. Both BSS and AIC are combinatorial and non-convex, which is why people resort to approximations.
The L1-norm regularization: One relation between the L1-norm and the L0-norm regularizations is that the adaptive Lasso with L1-norm regularization can be viewed as a convex approximation to the non-convex BSS with L0-norm regularization, as already mentioned above. In fact, the weighted Lasso employed in this work can also be viewed as an approximation to BSS [42]. One relation between the L1-norm and the L2-norm regularizations is that the former can be viewed as a data-adaptive version of the latter, as emphasized above. This adaptation emerges automatically from a purely statistical definition, as follows from the Lasso theory [20].

3.2. A Summary of the Properties of the Proposed Sparse DDK Filters

In a nutshell, the proposed DDK filter is time- and space-varying, azimuthally anisotropic, data-adaptive (i.e., independent of prior models), and sparse. The proposed filter is compared to other DDK-type filters in terms of these properties. The candidate filters included in this comparison are shown in Table 1. Some of the properties, e.g., spatial homogeneity and azimuthal isotropy, are more meaningful in the original spatial domain (namely, on the sphere) than in the transformed domain (namely, the spectral domain). Therefore, it is beneficial to give the space-domain filtering kernels corresponding to the spectral-domain filtering matrices. For any relevant vector and matrix, we always stick to the ordering rule of the degrees/orders in the SH coefficient vector $x$. Let the vector of the fully normalized spherical harmonics at location $P$ be denoted as $y_P$. Then, the filtering kernel is as follows:
$f_{PP'} = y_{P}^{T} F_0\, y_{P'}$.  (15)
Therefore, the values of a gravity functional at location P before and after the filtering are related as follows:
$\hat{T}_{P} = \int_{\Omega} f_{PP'}\, T_{P'}\, d\omega$.  (16)
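The spatial kernel $f_{PP'} = y_P^T F_0 y_{P'}$ can be evaluated directly once a basis vector $y_P$ is assembled. The sketch below (ours) uses an orthonormal real spherical-harmonic basis built from associated Legendre functions; the geodetic fully normalized basis differs only by a constant factor, which does not affect the structure of the kernel. As a sanity check, with the identity "filter" $F_0 = I$ the kernel at $P = P'$ must equal $\sum_l (2l+1)/(4\pi)$ by the addition theorem.

```python
import numpy as np
from math import factorial
from scipy.special import lpmv   # associated Legendre functions P_l^m

L = 5  # maximum degree of the illustrative basis

def real_sh_vector(theta, phi):
    """Orthonormal real spherical harmonics up to degree L.
    theta: colatitude, phi: longitude (radians)."""
    x = np.cos(theta)
    vals = []
    for l in range(L + 1):
        for m in range(-l, l + 1):
            am = abs(m)
            norm = np.sqrt((2 * l + 1) / (4 * np.pi)
                           * factorial(l - am) / factorial(l + am))
            P = lpmv(am, l, x)
            if m == 0:
                vals.append(norm * P)
            elif m > 0:
                vals.append(np.sqrt(2.0) * norm * P * np.cos(m * phi))
            else:
                vals.append(np.sqrt(2.0) * norm * P * np.sin(am * phi))
    return np.array(vals)

yP = real_sh_vector(0.7, 1.1)
F0 = np.eye(len(yP))             # identity "filter": kernel = reproducing kernel
kernel_PP = yP @ F0 @ yP         # f_PP at P = P'
# By the addition theorem this equals sum_l (2l+1)/(4 pi) = (L+1)^2/(4 pi).
assert abs(kernel_PP - (L + 1) ** 2 / (4.0 * np.pi)) < 1e-8
```

Replacing the identity with an actual spectral-domain filtering matrix $F_0$ and varying $P'$ around $P$ reproduces the anisotropic, location-dependent kernels discussed below.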
The properties of the filters under consideration can be summarized as follows:
First, the proposed filter is time-variable, namely varying from month to month. This property is shared with almost all the other filters. The temporal variations of a filter are due to variations of the error covariances and/or the signal covariances. Almost all DDK-type filters can employ time-variable error covariances, though some stationary-filtered solutions (e.g., DDK1, DDK2) are still released. The signal covariances in [3,4,5] are time-invariable, while the signal covariances in [6] and in this work are time-variable.
Second, the proposed filter is spatially inhomogeneous, i.e., with filtering kernels varying from location to location. This property is shared by all DDK-type filters. This is due to the location-dependent properties of the error covariances. Note that a spatially-inhomogeneous filter is non-symmetric [3].
Third, the proposed filter is anisotropic, i.e., the filtering kernels are azimuth-dependent. The anisotropy corresponds to the dependence of a diagonal filtering matrix on order in the spectral domain. Anisotropy is required from any filter that is able to destripe a solution, simply because the striping errors are anisotropic. Therefore, this property is shared by all DDK-type filters.
Fourth, the proposed filter is data-adaptive, being independent of prior models. This property is shared with [3,6]. The adaptivity is mainly due to the data-driven estimates of signal covariances.
Finally, the proposed filter is solution-sparse, which is shared by none of the others. Note that a filter itself is also sparse, i.e., some of its coefficients in the spectral domain are zeros. It is also possible to make the proposed filter sparse, following the approximations in [4].
It is also interesting to examine the other type of decorrelation filters, namely the empirical decorrelation (ED) filters [8,9], in terms of the properties considered here. First, the ED filter is clearly anisotropic, which is necessary for destriping. It is time-variable, because it is constructed from the time-variable data, namely the SH coefficients to be filtered. From this, it also follows that it is data-adaptive. The error and signal covariance matrices are not involved in ED filters. In our understanding, the complete neglect of covariances, especially the error covariances, is a disadvantage of ED filters: we see it as a waste of available information. ED filters are also spatially inhomogeneous and sparse (in terms of filter coefficients). However, they are generally not sparse in terms of the resulting solutions.

3.3. Demonstration of the Properties of the Sparse DDK Filter

The CSR RL05 solutions supplied with full error covariance matrices (max degree 60) for April 2004 and April 2010 are selected to demonstrate the properties of the proposed sparse DDK filter. The covariance matrices are readily available for download from http://download.csr.utexas.edu/outgoing/grace (accessed on 12 April 2022). The reason we do not use the latest RL06 data is that, currently, CSR only provides the full error covariance matrices of the RL05 solutions. We use the selected monthly solutions to devise their corresponding sparse DDK filters, with regularization parameters selected as 5.0 × 10⁶, 7.5 × 10⁵, and 2.5 × 10⁶, respectively. Figure 2 presents their normalized smoothing kernels in the east–west and north–south directions, where the kernel center is located at different latitudes (0°, 30°, 60°) along the 0° meridian. The column-wise comparison demonstrates that sparse DDK is time-variable, whereas the row-wise comparison proves its spatial inhomogeneity. On the other hand, the E–W and N–S directions show different filtering strengths, which implies that the sparse DDK filter is anisotropic. Furthermore, we note the negative side-lobes, which help to reduce signal leakage when compared to the monotonic Gaussian filtering [5].
Next, the monthly solutions under consideration are filtered using the corresponding sparse DDK filters presented above. Figure 3 presents the distribution of zero and nonzero elements in the filtered monthly solutions, clearly showing the sparsity of the filtered solutions. It is well known that gravity signals are mainly concentrated in the low-degree and low-order terms. We can observe in Figure 3 that those terms are well retained, while the high-degree and high-order terms, which are dominated by noise, are set to zero. This result fully satisfies the original intention of the filter design: to suppress high-degree noise while preserving low-degree signal. In addition, we also notice that the maximum degrees of the filtered solutions are lower than that of the unfiltered ones (i.e., 60), indicating that the sparse DDK filter reduces the spatial resolution of the solutions. However, this reduction in the nominal resolution is beneficial. Let us explain this with a simple example. Assume that a high-degree/order SH coefficient before filtering is equal to 1, with 0.1 of it being signal and 0.9 being noise. Considering that it is impossible for a filter to completely separate the signal from the noise at the same spectral point, it is more reasonable to discard this coefficient than to retain it. In this regard, L1-norm regularization can perform this discarding, or truncation, flexibly, whereas L2-norm Tikhonov regularization cannot. Furthermore, it has been shown in [25,43] that L1-norm minimization may yield better results than L2-norm minimization when the noise is non-Gaussian, as is the case for the correlated SH coefficient noise. The effect is similar to a simple truncation of the spherical harmonic series; however, this sparse regularization-based "truncation" is data-driven and adaptive and, hence, more judicious. It can be seen that there are also some zero coefficients at low or medium degrees/orders.
This should not be a surprise, recalling that the coefficients represent a residual signal with a reference signal removed. These zero coefficients merely mean that the corresponding terms have been approximated by the reference model with sufficient accuracy. In this regard, we can expect an even sparser solution if a better reference model is used.
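To illustrate why L1-norm regularization produces exact zeros while L2-norm Tikhonov regularization only shrinks coefficients, consider the orthonormal-design special case, where the lasso solution reduces to element-wise soft-thresholding. The following is a toy sketch (coefficient values and threshold are arbitrary, chosen only for illustration):

```python
import numpy as np

def tikhonov(d, mu):
    """L2 (ridge) shrinkage in the orthonormal-design case: every entry
    is scaled toward zero but never reaches it exactly."""
    return d / (1.0 + mu)

def soft_threshold(d, mu):
    """L1 (lasso) shrinkage in the orthonormal-design case: entries whose
    magnitude falls below mu are set exactly to zero, yielding sparsity."""
    return np.sign(d) * np.maximum(np.abs(d) - mu, 0.0)

coeffs = np.array([1.0, 0.1, -0.05, 0.8, -0.02])  # toy residual SH coefficients
ridge = tikhonov(coeffs, 0.2)          # all entries survive, merely shrunk
lasso = soft_threshold(coeffs, 0.2)    # small (noise-dominated) entries truncated
```

In the sparse DDK setting the design is not orthonormal and the L1 weights differ per coefficient, so an iterative solver such as ISTA/FISTA [26] is used instead; the qualitative thresholding behavior, however, is the same.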

4. Results

The 7-year time series of GRACE monthly solutions with corresponding covariance matrices [44], ranging from January 2004 to December 2010, is used to evaluate the performance of the proposed sparse DDK filter. The regularization parameter μ of each month, selected by GCV, ranges from 5 × 10⁵ to 7.5 × 10⁶. The following seven filters are compared: the Gaussian 300-km smoother (denoted as G300), the Swenson and Wahr (SW) empirical decorrelation filter (denoted as Swen), SW filtering + Gaussian 300-km smoothing (denoted as S300), the DDK2, DDK3, and DDK4 filters, and the proposed sparse DDK filter (denoted as Spar). The three variants of the DDK filter, namely DDK2, DDK3, and DDK4, are selected for comparison since we find that their filtering strengths are comparable to that of the sparse DDK. In addition, the CSR RL06 v2.0 mascon solutions are also included in the comparison, since they require no filtering and provide well-localized mass anomaly estimates [12,45]. We calculate the filtered mass anomaly using the filtered solutions with the C20 coefficient replaced with that derived from satellite laser ranging (SLR) [46], and with the degree-1 coefficients estimated by [47] added. The glacial isostatic adjustment (GIA) effect is corrected using the ICE-6G_D model [48].
In the following Section 4.1, we compare the filtered mass anomalies of the seven filters for both global and local cases and give their uncertainty estimates. In Section 4.2, we show the signal retaining rates and noise levels of each filtered solution series. Finally, we evaluate the sparsity of the sparse DDK filtered solution and its signal retainment in polar areas.

4.1. Filtered Mass Anomalies Analysis

The mass anomaly, namely surface density ξP, at a location P, in units of EWH, is computed as follows [49]:
\xi_P(\theta, \lambda) = \frac{a \rho_{\mathrm{ave}}}{3} \sum_{n=2}^{L_{\max}} \sum_{m=0}^{n} \frac{2n+1}{1+k_n} \left[ \Delta C_{nm} \cos(m\lambda) + \Delta S_{nm} \sin(m\lambda) \right] \bar{P}_{nm}(\cos\theta) = \mathbf{g}_P^{\mathrm{T}} \mathbf{x}, (17)
where g_P is the spectral conversion vector, which converts the spherical harmonic coefficient vector x into the surface mass anomaly at point P.
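A minimal sketch of evaluating Equation (17) at a single point is given below. This is our own illustrative implementation, not the authors' code: the fully normalized Legendre recursion follows the standard forward-column scheme, and the constants (a, ρ_ave) and Love numbers passed in are placeholders.

```python
import numpy as np

def legendre_norm(L, theta):
    """Fully (4-pi) normalized associated Legendre functions Pbar[n, m] at
    cos(theta), via the standard forward-column recursion (no Condon-Shortley
    phase, as is conventional in geodesy)."""
    t, u = np.cos(theta), np.sin(theta)
    P = np.zeros((L + 1, L + 1))
    P[0, 0] = 1.0
    for m in range(1, L + 1):  # sectorial terms Pbar_mm
        f = np.sqrt(3.0) if m == 1 else np.sqrt((2 * m + 1) / (2.0 * m))
        P[m, m] = f * u * P[m - 1, m - 1]
    for m in range(0, L):
        P[m + 1, m] = np.sqrt(2 * m + 3.0) * t * P[m, m]
        for n in range(m + 2, L + 1):
            a = np.sqrt((2 * n - 1.0) * (2 * n + 1) / ((n - m) * (n + m)))
            b = np.sqrt((2 * n + 1.0) * (n + m - 1) * (n - m - 1)
                        / ((n - m) * (n + m) * (2 * n - 3)))
            P[n, m] = a * t * P[n - 1, m] - b * P[n - 2, m]
    return P

def ewh_point(dC, dS, theta, lam, love_k, a=6.3781363e6, rho_ave=5517.0):
    """Sketch of Eq. (17): surface density (kg/m^2) at colatitude theta and
    longitude lam (radians); divide by the water density (1000 kg/m^3) for EWH
    in metres. dC, dS hold the residual Stokes coefficients, love_k the load
    Love numbers k_n (all illustrative placeholders)."""
    L = dC.shape[0] - 1
    P = legendre_norm(L, theta)
    m = np.arange(L + 1)
    cosml, sinml = np.cos(m * lam), np.sin(m * lam)
    xi = 0.0
    for n in range(2, L + 1):
        harm = dC[n, :n + 1] * cosml[:n + 1] + dS[n, :n + 1] * sinml[:n + 1]
        xi += (2 * n + 1) / (1 + love_k[n]) * np.dot(P[n, :n + 1], harm)
    return a * rho_ave / 3.0 * xi
```

Evaluating this on a latitude–longitude grid, row by row, yields the EWH maps discussed below.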
We again select the monthly solutions of April 2004 and April 2010 for illustration. The global distribution of EWHs before and after filtering is shown in Figure 4 for the selected months. Significant geophysical signals can be identified in all seven filtered solutions. The G300 smoothing can effectively reduce noise, but some residual stripes remain, especially over the oceans. The Swen empirical decorrelation can completely de-stripe the solution; however, some high-frequency noise remains at middle and low latitudes. The S300 filter handles both high-frequency noise and stripes efficiently, but causes some signal attenuation at high latitudes, e.g., over Greenland and Antarctica. In the DDK2, DDK3, DDK4, and sparse DDK filtered solutions, the suppression of high-frequency noise and stripes is visually similar. We also notice that the smoothing effect of the sparse DDK is closest to that of the DDK3 filter.
To further evaluate the filtered local mass anomalies of the sparse DDK filter, we selected six study areas: the Amazon Basin, Congo Basin, Ganges Basin, Yangtze Basin, Alaskan glaciers (defined in Figure S1 of the Supplementary Materials), and Greenland. The surface mass densities given in Equation (17) are integrated over a study area to calculate the total mass anomaly ζ in this area, as follows:
\zeta = \int_{\Omega} \xi_P \, \mathrm{d}\omega_P = \int_{\Omega} \mathbf{g}_P^{\mathrm{T}} \hat{\mathbf{x}} \, \mathrm{d}\omega_P = \left[ \int_{\Omega} \mathbf{g}_P^{\mathrm{T}} \, \mathrm{d}\omega_P \right] \hat{\mathbf{x}} \approx \left[ \sum_i \mathbf{g}_i^{\mathrm{T}} \Delta_i \right] \hat{\mathbf{x}} = \mathbf{h}^{\mathrm{T}} \hat{\mathbf{x}}. (18)
In the above, Δi denotes the area of the i-th grid cell in the sum-approximation of the integral. Accordingly, the formal variance of the total mass anomaly is computed through error propagation, as follows:
\sigma_\zeta^2 \approx \mathbf{h}^{\mathrm{T}} Q_{\hat{x}\hat{x}} \mathbf{h}, (19)
where Q_x̂x̂ denotes the covariance matrix of the filtered solution.
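The quadrature and error propagation in Equations (18) and (19) amount to a few matrix products. A toy numpy sketch (all matrices are synthetic stand-ins for the real conversion vectors, cell areas, and covariance matrix):

```python
import numpy as np

# Hypothetical example of Eqs. (18)-(19): propagate the filtered-solution
# covariance Q_xx to the variance of a regional total mass anomaly.
rng = np.random.default_rng(0)
k = 8                              # toy number of SH coefficients
g = rng.normal(size=(5, k))        # rows g_i^T: spectral conversion at 5 cells
areas = np.full(5, 1.0e10)         # Delta_i: grid-cell areas (m^2), toy values
h = g.T @ areas                    # h = sum_i g_i * Delta_i

A = rng.normal(size=(k, k))
Q_xx = A @ A.T                     # synthetic SPD covariance of the solution
var_zeta = h @ Q_xx @ h            # sigma_zeta^2 = h^T Q h
std_zeta = np.sqrt(var_zeta)       # formal error StD of the regional anomaly
```

With the real g_i and Q_x̂x̂, the same three lines produce the formal error StDs reported below.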
In the following, we compare the formal error StD (standard deviation) of the filtered regional mass anomaly of the sparse DDK and the conventional DDK filters (DDK2, DDK3, and DDK4). For the sparse DDK filter, there are four options to calculate Q_x̂x̂ in Equation (19), namely with Equation (11)/Equation (13), with Equation (11)/Equation (14), with Equation (12)/Equation (13), or with Equation (12)/Equation (14). For easy reference, we call the corresponding variance-covariance estimates Q_x̂x̂ "MAP with σ1", "EPL with σ1", "MAP with σ2", and "EPL with σ2", respectively. For the conventional DDK filters, the posterior estimate of the variance component is computed according to the leftmost equation of Equation (11). The formal covariance matrix of a filtered solution from an error-propagation point of view is given by Equation (14), while that from a Bayesian point of view is as follows:
Q_{\hat{x}\hat{x}} = \hat{\sigma}^2 \left( Q_{\mathrm{DDK}}^{-1} + \mu S_{\mathrm{DDK}}^{-1} \right)^{-1} = \hat{\sigma}^2 \left[ F_{\mathrm{DDK}} Q_{\mathrm{DDK}} \right], (20)
where FDDK is the block-diagonal filtering matrix, directly provided in conventional DDK filters. The diagonal signal covariance matrix SDDK, as a scaled variant of the matrix W introduced in Section 2.2, is usually provided in the form of a given power index and regularization parameter. The involved time-invariable error covariance matrix QDDK is recovered from FDDK and SDDK, based on the following relations:
F_{\mathrm{DDK}} = \left( Q_{\mathrm{DDK}}^{-1} + \mu S_{\mathrm{DDK}}^{-1} \right)^{-1} Q_{\mathrm{DDK}}^{-1}. (21)
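The relation above can be solved for Q_DDK: from F = (Q⁻¹ + μS⁻¹)⁻¹ Q⁻¹ it follows that μS⁻¹F = Q⁻¹(I − F), hence Q⁻¹ = μ S⁻¹ F (I − F)⁻¹. A round-trip sketch with synthetic matrices (names and dimensions are ours, for illustration only):

```python
import numpy as np

def recover_Q(F, S, mu):
    """Recover the error covariance Q from the filter matrix F and signal
    covariance S, using Q^-1 = mu * S^-1 @ F @ (I - F)^-1."""
    n = F.shape[0]
    Q_inv = mu * np.linalg.inv(S) @ F @ np.linalg.inv(np.eye(n) - F)
    return np.linalg.inv(Q_inv)

# Synthetic round-trip check: build F from a known (Q, S, mu), then recover Q.
rng = np.random.default_rng(1)
n, mu = 6, 2.5
A = rng.normal(size=(n, n))
Q = A @ A.T + n * np.eye(n)                 # toy SPD error covariance
S = np.diag(rng.uniform(0.5, 2.0, size=n))  # toy diagonal signal covariance
F = np.linalg.inv(np.linalg.inv(Q) + mu * np.linalg.inv(S)) @ np.linalg.inv(Q)
Q_rec = recover_Q(F, S, mu)
```

For the published DDK filters, the same inversion applied to the block-diagonal F_DDK yields only a block-diagonal approximation of the original error covariance, which is relevant to the comparison in Figure 6.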
Figure 5 shows the regional total mass anomaly series (left panel) for the six selected study areas, as well as their differences with respect to the mascon solutions (right panel); the corresponding root mean square differences (RMSD) are presented in Table 2. In general, all the filtered solutions produce similar mass anomaly estimates, and they agree well with the CSR RL06 v2.0 mascon solution. The mass anomaly differences (with the mascon solution as reference) of the sparse DDK filtered solution are comparable to those of the other DDK filters. For the Alaskan glaciers and Greenland, the S300 and Swen filters show significant differences from the other filtered solutions and the mascon solution, because of the stronger signal smoothing by the involved empirical decorrelation filter at middle and high latitudes. In Greenland, the Spar filtered solutions differ from the mascon solution (but are consistent with the other DDK solutions), which may result from differences in leakage errors. This indicates that the proposed sparse DDK filter does not suffer severe degradation at high latitudes, even though the high-degree terms are forced to zero.
Figure 6 presents the formal error StD of the regional mass anomalies for the sparse DDK, DDK2, DDK3, and DDK4 filters. The following observations are drawn. First, since the variance component σ1 in Equation (11) is almost the same size as σ2 in Equation (12) (the left panel in Figure 5), we can adopt either of them to scale the error covariance matrix of the filtered solution. Second, the error StDs derived from EPL in Equation (14) are always lower than those from MAP in Equation (13). This is because EPL only propagates measurement errors, while modelling errors are neglected; in other words, the uncertainty introduced by the prior information (i.e., the regularization) is ignored by EPL. Third, the error StDs of the sparse DDK filter are much larger than those of the DDK2, DDK3, and DDK4 filters. This may be because some covariance components are lost when the covariance matrix QDDK in Equation (21) is recovered from the block-diagonal FDDK and the signal covariance matrix SDDK. Finally, it is difficult to assess whether the formal error StD is too optimistic or too pessimistic, because the true regional mass anomalies are unknown. Nevertheless, we made an approximate verification using the mascon mass anomalies as the truth. Table 3 lists the mean formal error StDs of the four calculation schemes for the six areas. Compared with the RMSD of the sparse DDK in Table 2, we find that the formal error StDs from both MAP and EPL are comparable to those derived from the differences between the sparse DDK solutions and the mascon solutions in four cases (Amazon, Congo, Ganges, Yangtze) out of six; the discrepancies are small. For the Alaskan glaciers and Greenland, the formal error StDs are still over-optimistic. This may be because the signal leakage of the mascon solutions is smaller than that of the sparse DDK filtered solutions in these regions, which enlarges the mass anomaly differences between them in a way that cannot be captured by the formal errors.

4.2. Signal and Noise Level Analysis for Filtered Solutions

First, the signal and noise levels of the filtered solutions are assessed in the spectral domain. Figure 7 presents the geoid degree variances of the filtered solutions for the three selected months. According to [50,51], the degree variances of the first 30 degrees mainly reflect the signal amplitude of the time-varying gravity field, while those beyond degree 30 mainly show the noise level. In our experience, such a statement is probably a bit too optimistic; it would be fairer to say that the range between degrees 20 and 30 is a transition zone, where the contributions of noise and signal are similar. From this viewpoint, the S300 filter retains the least time-varying gravity signal, though its noise level is also the lowest. The other filters do not differ much in signal preservation (within the first 30 degrees), but they differ considerably in noise suppression (beyond degree 30). It is impressive that, compared with the other six filters, the sparse DDK retains a comparable gravity signal within the first 30 degrees while suppressing the noise beyond degree 30 significantly. This benefit mainly stems from the high-degree and high-order sparsity brought by the weighted L1-norm regularization.
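The per-degree curves in Figure 7 follow from the SH coefficients directly. A short sketch, assuming the common geoid degree amplitude convention σ_n = a·sqrt(Σ_m (ΔC_nm² + ΔS_nm²)); published degree variances may use slightly different scaling, so treat the constant as an assumption:

```python
import numpy as np

def geoid_degree_amplitude(dC, dS, a=6.3781363e6):
    """Geoid degree amplitude sigma_n = a * sqrt(sum_m (C_nm^2 + S_nm^2)),
    from (L+1, L+1) lower-triangular coefficient arrays dC, dS."""
    L = dC.shape[0] - 1
    return np.array([a * np.sqrt(np.sum(dC[n, :n + 1] ** 2 + dS[n, :n + 1] ** 2))
                     for n in range(L + 1)])
```

For a sparse DDK filtered solution, the amplitudes vanish identically above the (reduced) maximum degree, which is exactly the behavior seen beyond degree 30 in Figure 7.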
Then, the signal and noise levels of the filtered solutions are quantitatively analyzed in the spatial domain, both at selected locations and globally. In doing so, we derive a mass anomaly time-series at the Earth's surface, taking into account the Earth's oblateness [52]. The selected locations are five lakes: the Caspian Sea, Ladoga, Nasser, Tanganyika, and Victoria. The mean mass anomalies over those lakes (in terms of EWH) are compared with water level variations extracted from altimetry data. Annual variations are eliminated in the course of the comparison since they may contain noticeable nuisance signals (e.g., nuisance hydrological signals in the GRACE data and steric water level variations in the altimetry data). For a similar reason, long-term linear trends are eliminated in the case of the Ladoga and Nasser lakes. Technically, the comparison is performed using a least-squares estimation of the unknown coefficients C1, …, C5, E, and, optionally, C6, in the following equation:
C_1 + C_2 \sin \omega t_i + C_3 \cos \omega t_i + C_4 \sin 2\omega t_i + C_5 \cos 2\omega t_i + \underbrace{C_6 t_i}_{\text{optional}} + E \, h(t_i) = d(t_i), (22)
where ω = 2π/(1 yr); t_i is the time in the middle of the i-th month; h(t) is the altimetry-based time series of water level variations in the given lake; and d(t) is the GRACE-based time series of mean mass anomalies in the same lake. The post-fit residuals are interpreted as noise in the GRACE mass anomaly time-series and reported in terms of the noise StD.
The scaling factor E reflects the signal damping in a given GRACE-based time series. It is used to compute the quantity ε, hereafter called the "relative signal retaining":
\varepsilon = E / E_{\exp}. (23)
The "expected signal retaining" E_exp in this expression reflects the signal damping caused solely by the truncation of the spherical harmonic series at L_max = 60. Thus, the relative signal retaining ε contains information about the effect of filtering in the context of a particular lake. In the case of ideal noise-free input data, ε is equal to 1 if filtering is not applied, and reduces as the applied (low-pass) filter becomes increasingly aggressive. Further details concerning this comparison can be found in [53].
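The estimation of E and of the post-fit noise StD from Equation (22) is an ordinary least-squares fit. A sketch with synthetic series (the toy water-level signal, noise levels, and the recovered scaling of 0.8 are all illustrative assumptions):

```python
import numpy as np

# Toy version of the Eq. (22) fit: bias, annual and semi-annual terms, an
# optional trend C6, and the scaling factor E mapping altimetric water levels
# h(t) onto GRACE mass anomalies d(t).
omega = 2 * np.pi                       # 1/yr, with t in years
t = np.arange(84) / 12.0                # 7 years of monthly epochs
rng = np.random.default_rng(2)
h = np.cumsum(rng.normal(scale=0.05, size=t.size))   # toy interannual level
d = 0.8 * h + 0.1 * np.sin(omega * t) + rng.normal(scale=0.02, size=t.size)

A = np.column_stack([np.ones_like(t),
                     np.sin(omega * t), np.cos(omega * t),
                     np.sin(2 * omega * t), np.cos(2 * omega * t),
                     t,                  # optional trend term C6
                     h])
coef, *_ = np.linalg.lstsq(A, d, rcond=None)
E = coef[-1]                             # estimated scaling (signal retaining)
noise_std = np.std(d - A @ coef)         # post-fit residuals -> noise StD
```

Because the seasonal columns absorb the nuisance annual signal, E is estimated essentially from the interannual part of the series, which mirrors how the lake comparison isolates the retained signal.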
The obtained estimates of the noise StD and relative signal retaining for each of the eight solution time-series are shown per lake in Figure 8. Furthermore, Table 4 (second and third columns) reports the mean values of the estimated noise StD and the weighted mean values of the relative signal retaining, respectively, obtained by averaging over all five lakes. One can see that in most cases, the more aggressive a filter is (i.e., the lower the relative signal retaining), the lower the resulting noise level. For instance, the S300 filter, which is apparently the most aggressive one, yields the lowest noise StD and the lowest relative signal retaining, whereas the DDK4 filter, being the least aggressive one, results in a relatively high noise level (when the mean values over the five lakes are considered). The sparse DDK filter looks more aggressive than DDK3 but less aggressive than DDK2. To be specific, the mean values of both the relative signal retaining and the noise StD lie in between the corresponding values of the DDK2 and DDK3 filters.
We also estimate the accuracy of a filtered solution time-series globally at the nodes of a global equiangular 1° × 1° grid. Noise StD is estimated at each grid node separately, using the procedure proposed by [13]. That is, a regularized time series is computed at each node by minimizing the following objective function:
\Phi[x] = \frac{1}{\sigma_n^2} \sum_i (x_i - d_i)^2 + \frac{1}{\sigma_s^2} \Omega[x], (24)
where d_i is the original mass anomaly in the i-th month, x_i is the regularized mass anomaly in the same month (to be estimated), σ_n² is an unknown noise variance, σ_s² is an unknown signal variance, and Ω[x] is a regularization functional. The goal of the regularization functional in Equation (24) is to ensure that the regularized mass anomaly time-series x(t) at each grid node is sufficiently realistic, so that the deviations of the data from those time-series can be exploited for an accurate estimation of noise variances. A point of special concern is an excessive damping of the original time series, which may result in an overestimation of noise levels. To mitigate this effect, we have applied a tailored regularization, which takes into account that mass anomaly time-series are typically characterized by an annual periodicity and/or long-term trends. Originally, a regularization of this type was introduced by [13], who proposed to minimize year-to-year differences between the values of the first time-derivative of the mass anomaly time series. It was proved that such a regularization functional does not penalize annual periodic signals and linear trends at all, so that at least these signal components are not subject to any damping. In this study, we have adopted a modified variant of that functional, which minimizes year-to-year differences between the values of the second time-derivative of the mass anomaly time series. Assuming that this time series is represented as a continuous function of time x(t) in the time interval [t_beg; t_end], the exploited regularization functional can be written as:
\Omega[x] = \int_{t_{\mathrm{beg}} + 1\,\mathrm{yr}}^{t_{\mathrm{end}}} \left[ \ddot{x}(t) - \ddot{x}(t - 1\,\mathrm{yr}) \right]^2 \mathrm{d}t, (25)
where ẍ(t) stands for the second derivative of x(t). This type of regularization was called a "minimization of Month-to-month Year-to-year Triple Differences" (MYTD) in [53], where it was originally proposed. It yields better results than the minimization of year-to-year differences between the values of the first time-derivative, which was proposed in the aforementioned publication by [13]. The unknown noise variance σ_n² and signal variance σ_s² are estimated in parallel with the minimization of the objective function, using the Variance Component Estimation (VCE) method [36]. Only the former value is of interest in this study, since it can directly be used to estimate the noise StD σ_n at the given grid node.
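A discrete sketch of this regularized smoothing is given below. The triple-difference operator is our monthly-sampling approximation of ẍ(t) − ẍ(t − 1 yr), and the variance factors are fixed by hand rather than estimated by VCE, so this is a simplified stand-in for the full procedure:

```python
import numpy as np

def mytd_operator(n_months):
    """Rows approximate x''(t) - x''(t - 1 yr) by month-to-month year-to-year
    triple differences of a monthly series (second differences, lagged 12)."""
    rows = []
    for i in range(13, n_months - 1):
        r = np.zeros(n_months)
        r[i - 1], r[i], r[i + 1] = 1.0, -2.0, 1.0           # x''(t)
        r[i - 13], r[i - 12], r[i - 11] = -1.0, 2.0, -1.0   # -x''(t - 1 yr)
        rows.append(r)
    return np.asarray(rows)

def regularize(d, sigma_n, sigma_s):
    """Minimize ||x - d||^2 / sigma_n^2 + ||D x||^2 / sigma_s^2 in closed form."""
    n = d.size
    D = mytd_operator(n)
    N = np.eye(n) / sigma_n ** 2 + D.T @ D / sigma_s ** 2
    return np.linalg.solve(N, d / sigma_n ** 2)

# Toy check: an annual cycle plus trend lies in the null space of D and passes
# through undamped, while the added noise is reduced.
t = np.arange(84) / 12.0
signal = 3.0 * np.sin(2 * np.pi * t) + 0.5 * t
d = signal + np.random.default_rng(3).normal(scale=0.8, size=t.size)
x = regularize(d, sigma_n=0.8, sigma_s=0.1)
```

Because annual and linear components are annihilated by the operator exactly, they incur no damping, which is the property that makes the post-fit residuals a usable noise estimate.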
The obtained estimates of noise StD for the eight solution time-series under consideration are shown as maps in Figure 9. The global RMS values of the obtained estimates are reported in the last column of Table 4. One can see that the global estimates of noise level rank the solutions basically in the same way as the lake tests. For instance, the lowest noise level is observed after applying a relatively aggressive S300 filter, whereas the DDK4 filter yields a higher noise level than any other filter of the “DDK” type. The proposed sparse DDK filter results in a noise level that is lower than after the DDK3 filtering but higher than after the DDK2 filtering.

5. Discussion

Figure 10 shows the sparsity ratio and the maximum SH degree of sparse DDK filtered solutions from January 2004 to December 2010. The sparsity ratios are between 14.1% and 36.5%, while the maximum degrees are between 34 and 52. This sparsity significantly suppresses the high-degree and high-order noise, as presented in Figure 7.
It is well known that GRACE senses mass anomalies near the poles much better than in the rest of the world. This implies that a monthly solution complete to a relatively high degree is needed in order to fully exploit the information contained in the SH coefficients when mass anomalies in polar areas are estimated. However, our sparse DDK filter may lose some of the high-frequency signal when discarding the high-degree noise. Therefore, it is necessary to check the signal loss of the filtered solutions in polar areas. The six study areas of Section 4.1, located at low, middle, and high latitudes, are taken as examples here. Considering that the noise (or stripes) has been largely eliminated visually, as shown in Figure 4, we can approximately assume that the filtered solution mainly consists of signal. In the following, the retained signals within these areas are reported as the annual amplitude of the filtered solutions.
The average annual amplitudes for the six areas are summarized in Table 5. We find that the DDK-type filters always provide higher annual amplitudes than the other filters. In addition, the values of the mascon solutions are similar to those of the DDK filters. In most cases (except Greenland), the average annual amplitudes of the sparse DDK filtered solutions are again between those of DDK2 and DDK3, while the largest amplitudes are shown by the DDK4 solutions. This reveals that all the DDK filters behave similarly. We also notice that the annual amplitude of the sparse DDK filtered solutions tends to become weaker at higher latitudes. Over Greenland, for instance, it is 5–17% weaker (4.0 versus 4.3, 4.7, 4.8, and 4.2 cm) than those of the other DDK filtered solutions and the G300 smoothing, but still stronger than those of the Swen and S300 filters.

6. Conclusions

Focusing on decorrelation and denoising for GRACE time-variable gravity field solutions, the contributions of this work are as follows: (1) an alternative filter, called sparse DDK, is devised by using weighted L1-norm regularization; (2) the conventional mainstream filters are examined and compared in terms of their properties, including time-variability, location-inhomogeneity, anisotropy, independence from a prior model, and sparsity, and the relevant regularization strategies, including L0, L1, and L2 regularization, are discussed. The sparse DDK filter ensures that the obtained solutions are sparse, i.e., that most of the high-degree and high-order SH coefficients are zeros. This means that the strong noise originating from these coefficients is suppressed. The performance of the proposed sparse DDK filter and of the other mainstream filters was compared using real GRACE level-2 data. The results show that the proposed sparse DDK can remove stripes and high-frequency noise efficiently. The annual amplitudes and noise StDs, based both on point-wise estimates and on average values for selected regions, show that the sparse DDK filter yields signal and noise levels comparable to those of the traditional DDK filters. The time series of total mass anomalies in the selected research areas also indicate that the sparse DDK filter recovers mass anomalies similarly to the other DDK filters. Finally, the following two points should be noted:
First, the filtering matrix F in sparse DDK can also be simplified into a block-diagonal form, as proposed by [4], by retaining only the elements of the error covariance matrix that correspond to SH coefficients of the same order. On the other hand, one can also turn the conventional DDK filter, which is based on stationary error and signal covariance matrices, into a sparse DDK filter by directly replacing the Tikhonov regularization with the weighted L1-norm regularization. In addition, the regularization parameter involved in sparse DDK is selected by GCV, which is believed to correspond to the optimal tradeoff between signal retention and noise suppression. Of course, one can also adjust the regularization parameter of sparse DDK freely if a different filtering strength is needed.
Second, for the sparse DDK filter, high-degree and high-order sparsity means that the maximum degree of the filtered solution is reduced, implying that the spatial resolution of the filtered solution is reduced. However, this reduction does not cause a significant loss of mass anomaly signal, especially at middle and low latitudes (at high latitudes the signal loss is 5–17% in terms of annual amplitude, taking Greenland as an example). Note that the 60-degree GRACE monthly solutions are adopted in this work merely as an example. Considering that many data centers nowadays release monthly time-varying gravity field solutions up to degree 90 (e.g., GFZ, JPL) or 96 (e.g., CSR, Tongji), it can be expected that, after filtering with sparse DDK, the maximum degree of the filtered solutions can reach approximately 60 to 80, which would significantly mitigate the loss of spatial resolution compared with using 60-degree data. In this case, discarding the tiny higher-degree signals is worthwhile compared with retaining the higher-degree noise, and the negative effects of high-degree signal loss can be further reduced.

Supplementary Materials

The following supporting information can be downloaded at: https://www.mdpi.com/article/10.3390/rs14122810/s1, Text S1: Derivation for VCM of Lasso solution; Figure S1: Definition for Alaskan glaciers.

Author Contributions

Conceptualization, G.C.; methodology, G.C. and N.Q.; software, N.Q. and P.D.; validation, P.D., Z.W.; formal analysis, N.Q.; investigation, N.Q.; resources, J.G.; data curation, N.Q.; writing—original draft preparation, N.Q., G.C. and P.D.; writing—review and editing, P.D.; visualization, N.Q. and P.D.; supervision, J.G.; project administration, G.C. and J.G.; funding acquisition, G.C. and J.G. All authors have read and agreed to the published version of the manuscript.

Funding

This work is sponsored partially by the National Natural Science Foundation of China (Grant Nos. 42074001, 41974026, and 41774005), partially by the China Postdoctoral Science Foundation (Grant Nos. 2019M652010 and 2019T120477), partially by the Postgraduate Research and Practice Innovation Program of Jiangsu Province (Grant No. KYCX21_2292) and partially by the Postgraduate Innovation Program of China University of Mining and Technology (Grant No. 2021WLKXJ098).

Data Availability Statement

The CSR GRACE monthly gravity solutions and error covariance matrices used in the study are available via http://download.csr.utexas.edu/outgoing/grace (accessed on 12 April 2022). The time-series of water level variations in lakes are based on Jason-1, Jason-2, and Jason-3 satellite altimetry data (courtesy of the USDA/NASA G-REALM program at https://ipad.fas.usda.gov/cropexplorer/global_reservoir/) (accessed on 12 April 2022). The GRACE MATLAB Toolbox (GRAMAT) software is used for the calculation of mass anomaly [54]. The basin boundaries are provided at http://hydro.iis.u-tokyo.ac.jp/~taikan/TRIPDATA/TRIPDATA.html (accessed on 12 April 2022). Geometry of the considered lakes was generated with Generic Mapping Tools (GMT) software [55], which makes use of the Global Self-consistent, Hierarchical, High-resolution Geography database (GSHHG) at https://www.soest.hawaii.edu/pwessel/gshhg (accessed on 12 April 2022). GMT software was also used to create the plots.

Acknowledgments

Jurgen Kusche is acknowledged for insightful discussions on regularization methods in gravity data processing during Chang’s visit to the University of Bonn. The support provided by the China Scholarship Council (CSC) during Qian’s visit to Delft University of Technology is also acknowledged. We would like to thank Wei Feng for providing the GRACE MATLAB Toolbox (GRAMAT) [54], Roelof Rietbroek for making his codes on DDK filtering available and Taikan Oki for providing basin boundaries. Gratitude is also extended to the Centre for Space Research (CSR) for providing GRACE monthly solutions and corresponding covariance matrices.

Conflicts of Interest

The authors declare no conflict of interest.

References

  1. Ran, J.; Ditmar, P.; Klees, R.; Farahani, H.H. Statistically optimal estimation of Greenland Ice Sheet mass variations from GRACE monthly solutions using an improved mascon approach. J. Geod. 2018, 92, 299–319. [Google Scholar] [CrossRef] [PubMed]
  2. Shugar, D.; Jacquemart, M.; Shean, D.; Bhushan, S.; Upadhyay, K.; Sattar, A.; Schwanghart, W.; McBride, S.; de Vries, M.V.W.; Mergili, M. A massive rock and ice avalanche caused the 2021 disaster at Chamoli, Indian Himalaya. Science 2021, 373, 300–306. [Google Scholar] [CrossRef] [PubMed]
  3. Klees, R.; Revtova, E.; Gunter, B.C.; Ditmar, P.; Oudman, E.; Winsemius, H.C.; Savenije, H.H.G. The design of an optimal filter for monthly GRACE gravity models. Geophys. J. Int. 2008, 175, 417–432. [Google Scholar] [CrossRef]
  4. Kusche, J.; Schmidt, R.; Petrovic, S.; Rietbroek, R. Decorrelated GRACE time-variable gravity solutions by GFZ, and their validation using a hydrological model. J. Geod. 2009, 83, 903–913. [Google Scholar] [CrossRef]
  5. Kusche, J. Approximate decorrelation and non-isotropic smoothing of time-variable GRACE-type gravity field models. J. Geod. 2007, 81, 733–749. [Google Scholar] [CrossRef]
  6. Horvath, A.; Murböck, M.; Pail, R.; Horwath, M. Decorrelation of GRACE time variable gravity field solutions using full covariance information. Geosciences 2018, 8, 323. [Google Scholar] [CrossRef]
  7. Save, H.; Bettadpur, S.; Tapley, B.D. Reducing errors in the GRACE gravity solutions using regularization. J. Geod. 2012, 86, 695–711. [Google Scholar] [CrossRef]
  8. Swenson, S.C.; Wahr, J. Post-processing removal of correlated errors in GRACE data. Geophys. Res. Lett. 2006, 33. [Google Scholar] [CrossRef]
  9. Duan, X.; Guo, J.; Shum, C.K.; Van Der Wal, W. On the postprocessing removal of correlated errors in GRACE temporal gravity field solutions. J. Geod. 2009, 83, 1095–1106. [Google Scholar] [CrossRef]
  10. Zhang, Z.-Z.; Chao, B.F.; Lu, Y.; Hsu, H.-T. An effective filtering for GRACE time-variable gravity: Fan filter. Geophys. Res. Lett. 2009, 36. [Google Scholar] [CrossRef]
  11. Crowley, J.W.; Huang, J. A least-squares method for estimating the correlated error of GRACE models. Geophys. J. Int. 2020, 221, 1736–1749. [Google Scholar] [CrossRef]
  12. Prevost, P.; Chanard, K.; Fleitout, L.; Calais, E.; Walwer, D.; van Dam, T.; Ghil, M. Data-adaptive spatio-temporal filtering of GRACE data. Geophys. J. Int. 2019, 219, 2034–2055. [Google Scholar] [CrossRef]
  13. Ditmar, P.; Tangdamrongsub, N.; Ran, J.; Klees, R. Estimation and reduction of random noise in mass anomaly time-series from satellite gravity data by minimization of month-to-month year-to-year double differences. J. Geodyn. 2018, 119, 9–22. [Google Scholar] [CrossRef]
  14. Piretzidis, D.; Sra, G.; Karantaidis, G.; Sideris, M.G.; Kabirzadeh, H. Identifying presence of correlated errors using machine learning algorithms for the selective de-correlation of GRACE harmonic coefficients. Geophys. J. Int. 2018, 215, 375–388. [Google Scholar] [CrossRef]
  15. Belda, S.; Garcia-Garcia, D.; Ferrandiz, J.M. On the decorrelation filtering of RL05 GRACE data for global applications. Geophys. J. Int. 2015, 200, 173–184. [Google Scholar] [CrossRef]
  16. Zhan, J.-G.; Wang, Y.; Shi, H.-L.; Chai, H.; Zhu, C.-D. Removing correlative errors in GRACE data by the smoothness priors method. Chin. J. Geophys. -Chin. Ed. 2015, 58, 1135–1144. [Google Scholar] [CrossRef]
  17. Wang, F.; Shen, Y.; Chen, T.; Chen, Q.; Li, W. Improved multichannel singular spectrum analysis for post-processing GRACE monthly gravity field models. Geophys. J. Int. 2020, 223, 825–839. [Google Scholar] [CrossRef]
  18. Eom, J.; Seo, K.-W.; Lee, C.-K.; Wilson, C.R. Correlated error reduction in GRACE data over Greenland using extended empirical orthogonal functions. J. Geophys. Res. -Solid Earth. 2017, 122, 5578–5590. [Google Scholar] [CrossRef]
  19. Sasgen, I.; Martinec, Z.; Fleming, K. Wiener optimal filtering of GRACE data. Studia Geophys. Geod. 2006, 50, 499–508. [Google Scholar] [CrossRef]
  20. Tibshirani, R. Regression shrinkage and selection via the lasso. J. R. Stat. Society. Ser. B Methodol. 1996, 58, 267–288. [Google Scholar] [CrossRef]
  21. Hastie, T.; Tibshirani, R.; Wainwright, M. Statistical Learning with Sparsity: The Lasso and Generalizations; CRC Press: London, UK, 2015. [Google Scholar]
  22. Boyd, S.; Vandenberghe, L. Convex Optimization; Cambridge University Press: Cambridge, UK, 2004. [Google Scholar]
  23. Burnham, K.P.; Anderson, D.R. Model Selection and Multimodel Inference: A Practical Information-Theoretic Approach; Springer: New York, NY, USA, 2002. [Google Scholar]
  24. Qian, N.; Chang, G. Optimal filtering for state space model with time-integral measurements. Measurement 2021, 176, 109209. [Google Scholar] [CrossRef]
  25. Amiri-Simkooei, A. Formulation of L1 Norm Minimization in Gauss-Markov Models. J. Surv. Eng. 2003, 129, 37–43. [Google Scholar] [CrossRef]
  26. Beck, A.; Teboulle, M. A Fast Iterative Shrinkage-Thresholding Algorithm for Linear Inverse Problems. Siam J. Imaging Sci. 2009, 2, 183–202. [Google Scholar] [CrossRef]
  27. Zou, H. The adaptive Lasso and its oracle properties. J. Am. Stat. Assoc. 2006, 101, 1418–1429. [Google Scholar] [CrossRef]
  28. Chang, G.; Qian, N.; Chen, C.; Gao, J. Precise instantaneous velocimetry and accelerometry with a stand-alone GNSS receiver based on sparse kernel learning. Measurement 2020, 159, 107803. [Google Scholar] [CrossRef]
  29. Qian, N.; Chang, G.; Gao, J. Smoothing for continuous dynamical state space models with sampled system coefficients based on sparse kernel learning. Nonlinear Dyn. 2020, 100, 3597–3610. [Google Scholar] [CrossRef]
  30. Yang, L.; Chang, G.; Qian, N.; Gao, J. Improved atmospheric weighted mean temperature modeling using sparse kernel learning. GPS Solut. 2021, 25, 28. [Google Scholar] [CrossRef]
  31. Parikh, N.; Boyd, S. Proximal algorithms. Found. Trends Optim. 2014, 1, 127–239. [Google Scholar] [CrossRef]
  32. Golub, G.H.; Heath, M.T.; Wahba, G. Generalized Cross-Validation as a Method for Choosing a Good Ridge Parameter. Technometrics 1979, 21, 215–223. [Google Scholar] [CrossRef]
  33. Craven, P.; Wahba, G. Smoothing noisy data with spline functions. Numer. Math. 1979, 31, 377–403. [Google Scholar] [CrossRef]
  34. Zou, H.; Hastie, T.; Tibshirani, R. On the “degrees of freedom” of the lasso. Ann. Stat. 2007, 35, 2173–2192. [Google Scholar] [CrossRef]
  35. Kusche, J. A Monte-Carlo technique for weight estimation in satellite geodesy. J. Geod. 2003, 76, 641–652. [Google Scholar] [CrossRef]
  36. Koch, K.R.; Kusche, J. Regularization of geopotential determination from satellite data by variance components. J. Geod. 2002, 76, 259–268. [Google Scholar] [CrossRef]
  37. Xu, P.; Fukuda, Y.; Liu, Y. Multiple Parameter Regularization: Numerical Solutions and Applications to the Determination of Geopotential from Precise Satellite Orbits. J. Geod. 2006, 80, 17–27. [Google Scholar] [CrossRef]
  38. Qian, N.; Chang, G.; Gao, J.; Pan, C.; Yang, L.; Li, F.; Yu, H.; Bu, J. Vehicle’s Instantaneous Velocity Reconstruction by Combining GNSS Doppler and Carrier Phase Measurements Through Tikhonov Regularized Kernel Learning. IEEE Trans. Veh. Technol. 2021, 70, 4190–4202. [Google Scholar] [CrossRef]
  39. Breiman, L. Better subset regression using the nonnegative garrote. Technometrics 1995, 37, 373–384. [Google Scholar] [CrossRef]
  40. Akaike, H. A new look at the statistical model identification. IEEE Trans. Autom. Control. 1974, 19, 716–723. [Google Scholar] [CrossRef]
  41. Tikhonov, A.N.; Arsenin, V.Y. Solutions of Ill-Posed Problems; John Wiley & Sons: New York, NY, USA, 1977. [Google Scholar]
  42. Donoho, D.L. For most large underdetermined systems of linear equations the minimal L1-norm solution is also the sparsest solution. Commun. Pure Appl. Math. 2006, 59, 797–829. [Google Scholar] [CrossRef]
  43. Khodabandeh, A.; Amiri-Simkooei, A. Recursive algorithm for L1 norm estimation in linear models. J. Surv. Eng. 2011, 137, 1–8. [Google Scholar] [CrossRef]
  44. Save, H.; Bettadpur, S.; Tapley, B.D. High-resolution CSR GRACE RL05 mascons. J. Geophys. Res. Solid Earth 2016, 121, 7547–7569. [Google Scholar] [CrossRef]
  45. Watkins, M.M.; Wiese, D.N.; Yuan, D.-N.; Boening, C.; Landerer, F.W. Improved methods for observing Earth’s time variable mass distribution with GRACE using spherical cap mascons. J. Geophys. Res.-Solid Earth 2015, 120, 2648–2671. [Google Scholar] [CrossRef]
  46. Cheng, M.; Tapley, B.D.; Ries, J.C. Deceleration in the Earth’s oblateness. J. Geophys. Res. Solid Earth 2013, 118, 740–747. [Google Scholar] [CrossRef]
  47. Swenson, S.; Chambers, D.; Wahr, J. Estimating geocenter variations from a combination of GRACE and ocean model output. J. Geophys. Res. Solid Earth 2008, 113, B8. [Google Scholar] [CrossRef]
  48. Peltier, W.R.; Argus, D.F.; Drummond, R. Comment on “An Assessment of the ICE-6G_C (VM5a) Glacial Isostatic Adjustment Model” by Purcell et al. J. Geophys. Res.-Solid Earth 2018, 123, 2019–2028. [Google Scholar] [CrossRef]
  49. Wahr, J.; Molenaar, M.; Bryan, F. Time variability of the Earth’s gravity field: Hydrological and oceanic effects and their possible detection using GRACE. J. Geophy. Res.-Solid Earth 1998, 103, B12. [Google Scholar] [CrossRef]
  50. Luthcke, S.B.; Rowlands, D.D.; Lemoine, F.G.; Klosko, S.M.; Chinn, D.; McCarthy, J.J. Monthly spherical harmonic gravity field solutions determined from GRACE inter-satellite range-rate data alone. Geophys. Res. Lett. 2006, 33. [Google Scholar] [CrossRef]
  51. Chen, Q.; Shen, Y.; Chen, W.; Zhang, X.; Hsu, H. An improved GRACE monthly gravity field solution by modeling the non-conservative acceleration and attitude observation errors. J. Geod. 2016, 90, 503–523. [Google Scholar] [CrossRef]
  52. Ditmar, P. Conversion of time-varying Stokes coefficients into mass anomalies at the Earth’s surface considering the Earth’s oblateness. J. Geod. 2018, 92, 1401–1412. [Google Scholar] [CrossRef]
  53. Ditmar, P. How to quantify the accuracy of mass anomaly time-series based on grace data in the absence of knolwedeg about true signal? J. Geod. 2022. under review. [Google Scholar]
  54. Feng, W. GRAMAT: A comprehensive Matlab toolbox for estimating global mass variations from GRACE satellite data. Earth Sci. Inform. 2019, 12, 389–404. [Google Scholar] [CrossRef]
  55. Wessel, P.; Smith, W.H.; Scharroo, R.; Luis, J.; Wobbe, F. Generic mapping tools: Improved version released. Eos Trans. Am. Geophys. Union 2013, 94, 409–410. [Google Scholar] [CrossRef]
Figure 1. Relations between different regularization schemes. Reg. = regularization; BSS = best subset selection [39]; AIC = Akaike information criterion [40]; Lasso = least absolute shrinkage and selection operator [20]; A-Lasso = adaptive Lasso [27]; TR = Tikhonov regularization [41]; DR = DDK regularization [4,5]. “A(Data-driven)B” = B can be viewed as a data-driven version of A; “A(Approximation)B” = B can be viewed as an approximated version of A; “A(Generalization)B” = B can be viewed as a generalized version of A. The schemes in the dotted blue box are the focus of the current study.
Figure 2. Cross-sections of the normalized smoothing kernel of the proposed filter. Red: south–north (SN) direction; blue: west–east (WE) direction. Top: April 2004; bottom: April 2010. Left: at (0°E, 0°N); middle: at (0°E, 30°N); right: at (0°E, 60°N).
Figure 3. Distribution of nonzero SH coefficients (blue squares) in the solutions obtained with the proposed filter. Left: April 2004; Right: April 2010. The sparsity rate is defined as the ratio between the numbers of nonzero parameters and all parameters.
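The sparsity rate defined in the caption above can be computed directly from the vector of filtered SH coefficients. A minimal sketch (the function name and the example array are illustrative, not taken from the paper):

```python
import numpy as np

def sparsity_rate(coeffs):
    """Ratio of nonzero SH coefficients to the total number of coefficients."""
    coeffs = np.asarray(coeffs)
    return np.count_nonzero(coeffs) / coeffs.size

# Toy example: a filtered solution in which high-degree terms were set to zero.
filtered = np.array([1.2, -0.3, 0.0, 0.0, 0.7, 0.0, 0.0, 0.0])
rate = sparsity_rate(filtered)  # 3 nonzero out of 8 -> 0.375
```

For a real solution, `coeffs` would hold all Stokes coefficients above degree 1, matching the convention of Figure 10.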
Figure 4. The filtered monthly solutions (in terms of EWH) of 7 filters for April 2004 (the first and the third column) and April 2010 (the second and the fourth column). The solutions before filtering (the first row) and CSR RL06 mascon solutions (the fifth row) are also presented for comparison.
Figure 5. Variations of filtered regional mass anomalies vs. time (left panel), and the corresponding differences with respect to the mascon solutions (right panel). From top to bottom are results for Amazon Basin, Congo Basin, Ganges Basin, Yangtze Basin, Alaskan glaciers, and Greenland. No leakage correction is made for the filtered solutions.
Figure 6. Formal error StD of regional mass anomaly versus time. Left panel: sparse DDK; right panel: DDK2, DDK3 and DDK4. From top to bottom are results for Amazon Basin, Congo Basin, Ganges Basin, Yangtze Basin, Alaskan glaciers, and Greenland. Note that the error covariance matrices Q_DDK in DDK1, DDK2 and DDK3 are recovered from the block-diagonal filtering matrix F_DDK and the signal covariance matrix S_DDK.
Figure 7. The geoid degree-variance of filtered monthly solutions for: (a) April 2004, (b) April 2010. The high-degree variances of sparse DDK filtered solutions are strictly equal to zero.
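The geoid degree variances plotted in Figure 7 follow the standard per-degree accumulation of squared coefficients. A minimal sketch, assuming fully normalized coefficient anomalies stored in lower-triangular arrays C[n, m] and S[n, m]; the radius value and array layout are assumptions, not taken from the paper:

```python
import numpy as np

R = 6378136.3  # reference Earth radius in metres (assumed value)

def geoid_degree_variance(C, S):
    """Geoid degree variance sigma_n^2 = R^2 * sum_{m=0}^{n} (C[n,m]^2 + S[n,m]^2),
    for dimensionless Stokes coefficient anomalies C, S of shape (n_max+1, n_max+1)."""
    n_max = C.shape[0] - 1
    return np.array([R**2 * np.sum(C[n, :n + 1]**2 + S[n, :n + 1]**2)
                     for n in range(n_max + 1)])

# Toy example up to degree 2: all signal placed in degree 2.
C = np.zeros((3, 3)); S = np.zeros((3, 3))
C[2, 0], C[2, 1], S[2, 2] = 1e-9, 2e-9, 2e-9
var = geoid_degree_variance(C, S)  # var[2] = 9e-18 * R**2, var[0] = var[1] = 0
```

For a sparse DDK solution, the entries of C and S above the retained maximum degree are exactly zero, which is why the high-degree variances in Figure 7 vanish identically.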
Figure 8. Assessment of the time-series of mean mass anomalies per lake extracted from the eight GRACE solution time-series under consideration: noise standard deviation (left) and relative signal retaining (right).
Figure 9. Noise StD for the eight filtered solution time-series under consideration estimated at the nodes of a global 1° × 1° grid with the VCE method.
Figure 10. The sparsity rate (left axis) and maximum SH degree (right axis) of sparse DDK filtered solutions from January 2004 to December 2010. The sparsity rate is defined as the percentage of the number of nonzero elements to total number. Note that the degree 0 and 1 terms are not considered. The maximum SH degree is the highest degree of nonzero elements in each sparse DDK filtered solution.
Table 1. Properties of different decorrelation filters. DR: DDK regularization [4,5]; ANS: anisotropic non-symmetric [3]; VADER: time variable decorrelation [6]; Spar: sparse DDK filter, proposed herein.
Properties                        | DDK    | ANS | VADER | Spar
Filter Time Variable?             | No/Yes | Yes | Yes   | Yes
Error Covariances Time Variable?  | No/Yes | Yes | Yes   | Yes
Signal Covariances Time Variable? | No     | No  | Yes   | Yes
Spatially Inhomogeneous?          | Yes    | Yes | Yes   | Yes
Anisotropic?                      | Yes    | Yes | Yes   | Yes
Independent of Prior Models?      | No     | Yes | Yes   | Yes
Filter Sparse?                    | Yes/No | No  | Yes   | Yes/No
Solution Sparse?                  | No     | No  | No    | Yes
Table 2. The RMSD of the regional total mass anomaly of different filtered solutions with respect to mascon solutions. The results are calculated from the right panel of Figure 5.
Filter | Amazon | Congo | Ganges | Yangtze | Alaska | Greenland
G300   | 66.5   | 42.0  | 19.5   | 18.9    | 79.0   | 212.8
Swen   | 76.3   | 48.7  | 20.9   | 20.2    | 120.6  | 457.1
S300   | 99.7   | 47.8  | 26.6   | 17.1    | 136.3  | 456.3
DDK2   | 44.1   | 39.4  | 19.1   | 18.7    | 69.7   | 200.5
DDK3   | 44.6   | 41.1  | 16.2   | 19.7    | 62.3   | 189.3
DDK4   | 45.2   | 41.3  | 16.2   | 20.2    | 62.1   | 185.4
Spar   | 52.3   | 42.5  | 21.1   | 25.2    | 63.1   | 191.7
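The RMSD reported in Table 2 is the root-mean-square of the month-by-month differences between each filtered regional time series and the mascon series. A minimal sketch of this computation (the function name and example values are illustrative, not from the paper):

```python
import numpy as np

def rmsd(filtered_series, mascon_series):
    """Root-mean-square difference between two equally sampled regional
    mass-anomaly time series (both in the same units, e.g. cm EWH)."""
    d = np.asarray(filtered_series, dtype=float) - np.asarray(mascon_series, dtype=float)
    return float(np.sqrt(np.mean(d**2)))

# Toy example with three monthly epochs.
value = rmsd([1.0, 2.0, 3.0], [1.0, 2.0, 5.0])  # sqrt(4/3) ~ 1.155
```

In the paper's setting, the inputs would be the 84 monthly regional anomalies (January 2004 to December 2010) of a filtered solution and of the CSR RL06 mascon solution.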
Table 3. The mean formal error StD of regional mass anomalies calculated from the left panel of Figure 6.
Indicator   | Amazon | Congo | Ganges | Yangtze | Alaska | Greenland
MAP with σ1 | 67.8   | 50.8  | 16.9   | 20.5    | 38.0   | 26.5
EPL with σ1 | 60.2   | 42.5  | 12.1   | 17.0    | 28.3   | 23.2
MAP with σ2 | 67.5   | 50.6  | 16.8   | 20.4    | 37.8   | 26.4
EPL with σ2 | 60.0   | 42.4  | 12.1   | 16.9    | 28.2   | 23.1
Table 4. The second and third columns summarize the results of the lake tests, showing the mean values of the estimated noise StD and weighted mean values of the relative signal retaining, respectively. Both quantities are the result of averaging over all five considered lakes. The last column reports the global estimates of noise standard deviations obtained with the VCE method.
GRACE Solutions | Lake Tests: Relative Signal Retaining (%) | Lake Tests: Noise StD (cm EWH) | Global: Noise StD (cm EWH)
Unfi            | 110.5 ± 10.4                              | 18.3                           | 23.6
G300            | 59.0 ± 2.4                                | 3.8                            | 3.8
Swen            | 26.8 ± 2.6                                | 5.3                            | 5.7
S300            | 24.1 ± 1.7                                | 2.6                            | 1.3
DDK2            | 44.4 ± 1.7                                | 2.8                            | 1.6
DDK3            | 62.0 ± 2.0                                | 3.3                            | 2.4
DDK4            | 66.7 ± 2.2                                | 3.5                            | 2.8
Spar            | 50.8 ± 2.1                                | 3.1                            | 2.3
Table 5. The regional average annual amplitude of filtered mass anomaly solutions in six areas of different latitudes. The results are based on data from January 2004 to December 2010 and presented in EWH (cm).
Region           | G300 | Swen | S300 | DDK2 | DDK3 | DDK4 | Mascon | Spar
Amazon           | 21.4 | 23.0 | 20.6 | 22.7 | 23.2 | 23.3 | 23.0   | 23.2
Congo            | 12.1 | 13.3 | 11.6 | 12.8 | 13.2 | 13.3 | 13.0   | 13.1
Ganges           | 12.8 | 13.4 | 12.0 | 13.4 | 13.9 | 14.0 | 13.7   | 13.7
Yangtze          | 5.4  | 5.5  | 4.6  | 5.1  | 5.8  | 5.9  | 5.5    | 5.4
Alaskan glaciers | 10.2 | 8.3  | 7.5  | 10.2 | 11.0 | 11.2 | 12.1   | 10.6
Greenland        | 4.2  | 2.9  | 2.2  | 4.3  | 4.7  | 4.8  | 6.7    | 4.0
Publisher’s Note: MDPI stays neutral with regard to jurisdictional claims in published maps and institutional affiliations.

Qian, N.; Chang, G.; Ditmar, P.; Gao, J.; Wei, Z. Sparse DDK: A Data-Driven Decorrelation Filter for GRACE Level-2 Products. Remote Sens. 2022, 14, 2810. https://doi.org/10.3390/rs14122810