Article

Regularization Total Least Squares and Randomized Algorithms

1 School of Mathematics and Statistics, Qinghai Minzu University, Xining 810007, China
2 School of Mathematics and Information Science, Baoji University of Arts and Sciences, Baoji 721000, China
3 School of Mathematics, Southeast University, Nanjing 210096, China
* Author to whom correspondence should be addressed.
Mathematics 2024, 12(13), 1927; https://doi.org/10.3390/math12131927
Submission received: 15 May 2024 / Revised: 10 June 2024 / Accepted: 17 June 2024 / Published: 21 June 2024

Abstract

In order to obtain an effective approximate solution of discrete ill-conditioned problems, Golub, Hansen, and O'Leary combined Tikhonov regularization with the total least squares (TRTLS) method, using a bidiagonalization technique for the computational aspects. In this paper, the generalized singular value decomposition (GSVD) is used for the computational aspects instead, and the Tikhonov regularized total least squares based on the generalized singular value decomposition (GTRTLS) algorithm is proposed, whose time complexity is better than that of TRTLS. For medium- and large-scale problems, the randomized GSVD method is adopted to establish the randomized GTRTLS (RGTRTLS) algorithm, which reduces the storage requirement and accelerates the convergence of the GTRTLS algorithm.

1. Introduction

In practical applications, many discrete ill-conditioned problems arising from different fields of physics and engineering can be reduced to solving linear systems of the form $Ax \approx b$. The methods commonly used are least squares (LS) [1] and total least squares (TLS) [2,3]. However, such problems are often ill-conditioned, as with integral equations of the first kind [4,5]. In order to reduce the severe instability caused by the problems themselves, regularization [6,7,8,9,10,11] is an effective remedy: the original ill-conditioned problem is replaced by a neighboring well-conditioned one, whose solution, called a regularized solution, approximates the true one. Tikhonov regularization is one of the most common such methods and is widely used in industry [6]. For example, the Tikhonov regularized TLS (TRTLS) method proposed by Golub, Hansen, and O'Leary can be used to approach the true solution; its computation relies on a bidiagonalization technique. It has been shown that in some practical problems the truncated singular value method cannot obtain an adequate approximate solution. The total least squares problem with general Tikhonov regularization (TRTLS) is a non-convex optimization problem with local non-global minimizers. Xia [12] proposed an efficient branch-and-bound algorithm (algorithm BTD) for solving TRTLS problems, guaranteed to find a global $\epsilon$-approximation solution in at most $O(1/\epsilon)$ iterations, with a computational effort of $O(n^3\log(1/\epsilon))$ per iteration. Beamforming is one of the most important techniques for enhancing signal quality in array sensor signal processing, and the performance of a beamformer is usually related to the design of the array configuration and the beamformer weights. In [13], Chen first proposed a design model for a proximal sparse beamformer, which obtains sparse and robust filter coefficients by solving a composite optimization problem whose objective function is the sum of the least squares term, the approximate term, and the $\ell_1$-regularization term.
Hansen often uses the generalized singular value decomposition (GSVD) to analyze regularization methods [14]. However, using the GSVD to solve large-scale discrete ill-conditioned problems requires a large amount of computation and memory. For this kind of problem, Martin and Reichel [9] proposed a method to find the corresponding truncated regularization (TR) solution by using a low-rank partial singular value decomposition. In order to improve the time complexity, this paper uses the GSVD to deal with Tikhonov regularized TLS and establishes the Tikhonov regularized TLS based on the GSVD (GTRTLS) algorithm. At the same time, for medium- and large-scale problems, in order to reduce the storage requirements and accelerate the GSVD computation, the randomized GSVD method [15,16] is used, and we thus obtain the randomized GTRTLS (RGTRTLS) algorithm. For randomized algorithms for large-scale matrix decompositions and their application to ill-conditioned problems, see [17,18,19,20] for examples and details.
Our main contribution is to use GSVD technology to deal with Tikhonov regularization TLS (GTRTLS) and to adopt the randomized techniques of [15,16] to implement the GTRTLS procedure in the regularization. The randomized GSVD requires much less storage and computational time than the classical schemes. Numerical examples show the effectiveness and superiority of our algorithms.
This paper is organized as follows: Section 2 describes our technique of combining Tikhonov regularized TLS and the GSVD. Section 3 contains our randomized algorithms, and their error analyses are given in Section 4. The improvements in time and memory requirements are illustrated with numerical examples in Section 5. Section 6 concludes the paper.

2. Tikhonov Regularization TLS and GSVD

The regularized TLS problem can be expressed as
\[
\min_{\bar A,\,\bar b}\ \big\|(A,b)-(\bar A,\bar b)\big\|_F \quad \text{s.t.}\quad \bar A x=\bar b,\quad \|Lx\|_2\le\delta, \tag{1}
\]
where $\delta$ is a positive constant. Typical examples of the matrix $L$ are the first derivative approximation $L_1$ and the second derivative approximation $L_2$, which are as follows (see [14], Equation (1.2), and [21], Equation (4.57)):
\[
L_1=\begin{pmatrix}1&-1&&\\&\ddots&\ddots&\\&&1&-1\end{pmatrix}\in\mathbb{R}^{(n-1)\times n},\qquad
L_2=\begin{pmatrix}1&-2&1&&\\&\ddots&\ddots&\ddots&\\&&1&-2&1\end{pmatrix}\in\mathbb{R}^{(n-2)\times n}.
\]
More precisely, $L_1$ and $L_2$ are finite-difference approximations of the first and second derivative operators on a uniform grid, with the grid-spacing scaling factor ignored.
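As a concrete illustration, the operators $L_1$ and $L_2$ can be assembled in a few lines of NumPy (a sketch; the sign convention of the stencils follows the display above, and Hansen's Regularization Tools use an equivalent variant):

```python
import numpy as np

def first_derivative_operator(n):
    """(n-1) x n forward-difference approximation of the first derivative
    on a uniform grid, scaling factor ignored (the paper's L_1)."""
    L = np.zeros((n - 1, n))
    for i in range(n - 1):
        L[i, i] = 1.0
        L[i, i + 1] = -1.0
    return L

def second_derivative_operator(n):
    """(n-2) x n central-difference approximation of the second derivative
    on a uniform grid, scaling factor ignored (the paper's L_2)."""
    L = np.zeros((n - 2, n))
    for i in range(n - 2):
        L[i, i] = 1.0
        L[i, i + 1] = -2.0
        L[i, i + 2] = 1.0
    return L

# Sanity checks: a constant vector is annihilated by L_1,
# and a linear grid function is annihilated by L_2.
t = np.linspace(0.0, 1.0, 8)
print(np.allclose(first_derivative_operator(8) @ np.ones(8), 0.0))   # True
print(np.allclose(second_derivative_operator(8) @ t, 0.0))           # True
```

The null spaces of $L_1$ and $L_2$ (constants and linear functions, respectively) are exactly the components left unpenalized by the constraint $\|Lx\|_2\le\delta$.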
The corresponding Lagrange multiplier formulation is
\[
\mathcal{L}(\bar A,x,\mu)=\big\|(A,b)-(\bar A,\bar b)\big\|_F^2+\mu\big(\|Lx\|_2^2-\delta^2\big), \tag{2}
\]
where μ is the Lagrange multiplier.
To ensure that the TRTLS problem (1) has a unique solution, throughout this paper, we assume that
\[
\sigma_{\min}(AK)>\sigma_{\min}\big((AK,\ b)\big),
\]
where $K\in\mathbb{R}^{n\times s}$ is a matrix whose columns form an orthonormal basis of the null space of the regularization matrix $L$, and $\sigma_{\min}$ denotes the smallest singular value of its argument [4].
A popular approach to overcoming numerical instability is Tikhonov regularization TLS [4]. It can be seen that the regularized total least squares solution can be obtained from the following theorem (see reference [7]):
Theorem 1
([7]). With the inequality constraint replaced by equality, the TRTLS solution $x$ of (1) is a solution of the problem
\[
\big(A^TA+\lambda_I I_n+\lambda_L L^TL\big)x=A^Tb, \tag{4}
\]
where the parameters $\lambda_I$ and $\lambda_L$ are given by
\[
\lambda_I=-\frac{\|b-Ax\|_2^2}{1+\|x\|_2^2},\qquad
\lambda_L=\mu\big(1+\|x\|_2^2\big),
\]
and where $\mu$ is the Lagrange multiplier in (2). $\lambda_I$ and $\lambda_L$ are related by
\[
\lambda_L\,\delta^2=b^T(b-Ax)+\lambda_I.
\]
Moreover, the TLS residual satisfies
\[
\big\|(A,b)-(\bar A,\bar b)\big\|_F^2=-\lambda_I.
\]
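Once $\lambda_I$ and $\lambda_L$ are fixed, the system of Theorem 1 is an ordinary linear system. The following NumPy sketch solves it directly for illustrative parameter values (the matrix sizes and the values of $\lambda_I$, $\lambda_L$ are assumptions for the example, not values produced by the theorem):

```python
import numpy as np

def trtls_normal_equations(A, b, L, lam_I, lam_L):
    """Solve (A^T A + lam_I I_n + lam_L L^T L) x = A^T b directly.
    lam_I and lam_L are taken as given; in the paper they come from
    Theorem 1 (note lam_I is negative there)."""
    n = A.shape[1]
    M = A.T @ A + lam_I * np.eye(n) + lam_L * (L.T @ L)
    return np.linalg.solve(M, A.T @ b)

rng = np.random.default_rng(0)
A = rng.standard_normal((20, 10))
b = rng.standard_normal(20)
L = np.eye(10)                      # simplest choice of regularization matrix
# Small illustrative parameters; lam_I < 0 mimics the sign from Theorem 1,
# and the shifted matrix remains positive definite for this well-scaled A.
x = trtls_normal_equations(A, b, L, lam_I=-1e-3, lam_L=1e-2)
```

This direct solve is only practical for small problems; the GSVD-based reduction developed below avoids forming $A^TA$ and supports efficient re-solves for many parameter values.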
For problem (1), we make the following assumptions:
\[
A\in\mathbb{R}^{m\times n},\quad L\in\mathbb{R}^{p\times n},\quad m\ge n\ge p,\quad \operatorname{rank}(L)=p,
\]
\[
\mathcal{N}(A)\cap\mathcal{N}(L)=\{0\}\ \Longleftrightarrow\ \operatorname{rank}\begin{pmatrix}A\\L\end{pmatrix}=n.
\]
According to the literature [1,12,22], the GSVD of the matrix pair $\{A,L\}$ is
\[
A=U\begin{pmatrix}\Sigma\\0\end{pmatrix}X^{-1},\qquad
\Sigma=\begin{pmatrix}\Sigma_p&\\&I_{n-p}\end{pmatrix},\qquad
L=V\,(M\ \ 0)\,X^{-1}, \tag{6}
\]
where $U\in\mathbb{R}^{m\times m}$ and $V\in\mathbb{R}^{p\times p}$ are orthogonal matrices and $X\in\mathbb{R}^{n\times n}$ is invertible. The matrices $\Sigma_p=\operatorname{diag}\{\sigma_1,\sigma_2,\dots,\sigma_p\}$ and $M=\operatorname{diag}\{\mu_1,\mu_2,\dots,\mu_p\}$ satisfy $0\le\sigma_1\le\sigma_2\le\dots\le\sigma_p\le 1$, $1\ge\mu_1\ge\mu_2\ge\dots\ge\mu_p>0$, and $\sigma_i^2+\mu_i^2=1$, $i=1,2,\dots,p$.
It can be seen that (4) is equivalent to the augmented system
\[
\begin{pmatrix}I_m&0&A\\0&I_p&\lambda_L^{1/2}L\\A^T&\lambda_L^{1/2}L^T&-\lambda_I I_n\end{pmatrix}
\begin{pmatrix}r\\s\\x\end{pmatrix}=\begin{pmatrix}b\\0\\0\end{pmatrix}. \tag{7}
\]
In order to improve the time complexity, this paper uses the GSVD to deal with Tikhonov regularization TLS. In the first step, substituting the GSVD (6) of $\{A,L\}$ into (7) gives
\[
\begin{pmatrix}I_m&0&\begin{pmatrix}\Sigma\\0\end{pmatrix}X^{-1}\\[2pt]0&I_p&\lambda_L^{1/2}(M\ \ 0)X^{-1}\\[2pt]X^{-T}(\Sigma^T\ \ 0)&\lambda_L^{1/2}X^{-T}\begin{pmatrix}M^T\\0\end{pmatrix}&-\lambda_I I_n\end{pmatrix}
\begin{pmatrix}U^Tr\\V^Ts\\x\end{pmatrix}=\begin{pmatrix}U^Tb\\0\\0\end{pmatrix}.
\]
Let $U=(U_1,U_2)$ with $U_1\in\mathbb{R}^{m\times n}$; then we have
\[
\begin{pmatrix}I_n&0&\Sigma X^{-1}\\0&I_p&\lambda_L^{1/2}(M\ \ 0)X^{-1}\\X^{-T}\Sigma^T&\lambda_L^{1/2}X^{-T}\begin{pmatrix}M^T\\0\end{pmatrix}&-\lambda_I I_n\end{pmatrix}
\begin{pmatrix}U_1^Tr\\V^Ts\\x\end{pmatrix}=\begin{pmatrix}U_1^Tb\\0\\0\end{pmatrix},
\]
or
\[
\left(\Sigma^T\Sigma+\lambda_I X^TX+\lambda_L\begin{pmatrix}M^TM&0\\0&0\end{pmatrix}\right)X^{-1}x=\Sigma^TU_1^Tb.
\]
In the second step, using Eldén's algorithm [4], only $p$ steps of Givens transformations are needed to eliminate the block $\lambda_L^{1/2}(M\ \ 0)$, which can be expressed as
\[
\begin{pmatrix}\Sigma\\\lambda_L^{1/2}(M\ \ 0)\end{pmatrix}=G\begin{pmatrix}\hat\Sigma\\0\end{pmatrix}=\begin{pmatrix}G_{11}&G_{12}\\G_{21}&G_{22}\end{pmatrix}\begin{pmatrix}\hat\Sigma\\0\end{pmatrix}.
\]
When $G$ is applied to the augmented system (9), we have
\[
\begin{pmatrix}I_n&0&\hat\Sigma X^{-1}\\0&I_p&0\\X^{-T}\hat\Sigma^T&0&-\lambda_I I_n\end{pmatrix}
\begin{pmatrix}\hat r\\\hat s\\x\end{pmatrix}=\begin{pmatrix}G_{11}^TU_1^Tb\\G_{12}^TU_1^Tb\\0\end{pmatrix}.
\]
Since the solution $\hat s$ can be read off from the above formula, only the following system needs to be considered:
\[
\begin{pmatrix}I_n&\hat\Sigma X^{-1}\\X^{-T}\hat\Sigma^T&-\lambda_I I_n\end{pmatrix}
\begin{pmatrix}\hat r\\x\end{pmatrix}=\begin{pmatrix}G_{11}^TU_1^Tb\\0\end{pmatrix}.
\]
In the third step, $\hat\Sigma X^{-1}$ is reduced to an $n\times n$ bidiagonal matrix $B$ by orthogonal transformations $H$ and $K$ such that $H^T\hat\Sigma X^{-1}K=B$, giving
\[
\begin{pmatrix}I_n&B\\B^T&-\lambda_I I_n\end{pmatrix}
\begin{pmatrix}H^T\hat r\\K^Tx\end{pmatrix}=\begin{pmatrix}H^TG_{11}^TU_1^Tb\\0\end{pmatrix}.
\]
Finally, through a series of Givens transformations, the coefficient matrix of the above system can be transformed into a $2n\times 2n$ symmetric indefinite tridiagonal matrix, and the resulting system can be solved by Gaussian elimination with partial pivoting.
To sum up, we call the above procedure the Tikhonov regularized total least squares algorithm based on the generalized singular value decomposition (GTRTLS algorithm for short).
Remark 1.
In order to overcome the ill-posedness, we can discard the entries of $\Sigma$ that are close to 0 in the GSVD, that is, use the truncated GSVD (TGSVD)
\[
A_k=U\begin{pmatrix}\Sigma_p^{(n-k)}&\\&I_{n-p}\end{pmatrix}X^{-1}
=U\begin{pmatrix}\operatorname{diag}(0,\dots,0,\sigma_{n-k+1},\dots,\sigma_p)&\\&I_{n-p}\end{pmatrix}X^{-1}
\]
together with $L$ (see Equation (6)), where $\Sigma_p^{(n-k)}$ ($n-p\le k\le n$) equals $\Sigma_p$ with the smallest $n-k$ values $\sigma_i$ replaced by zeros. In TGSVD, the main information of the original system is retained by choosing an appropriate parameter $k$, and the truncated system is then obtained by the truncation regularization method. In other words, we combine truncated GSVD and TR to achieve a better regularization effect, which we call TGTRTLS; the expression is as follows:
\[
\big(A_k^TA_k+\lambda_I I_n+\lambda_L L^TL\big)x_{k,\lambda}=A_k^Tb.
\]
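When $L=I_n$, the TGSVD above reduces to the ordinary truncated SVD. The following NumPy sketch illustrates the truncation on a Hilbert matrix (an illustrative, severely ill-conditioned example chosen here, not one of the paper's test problems):

```python
import numpy as np

def tsvd_solve(A, b, k):
    """Truncated-SVD solution: keep only the k largest singular values.
    For L = I_n, the TGSVD of Remark 1 reduces to this classical TSVD."""
    U, s, Vt = np.linalg.svd(A, full_matrices=False)
    return Vt[:k].T @ ((U[:, :k].T @ b) / s[:k])

# Hilbert matrix: H_ij = 1 / (i + j + 1), a classic ill-conditioned test.
i = np.arange(12)
A = 1.0 / (i[:, None] + i[None, :] + 1.0)
x_true = np.ones(12)
b = A @ x_true

# Rank-k truncation A_k; its 2-norm distance to A equals sigma_{k+1},
# the largest discarded singular value.
U, s, Vt = np.linalg.svd(A)
k = 5
A_k = (U[:, :k] * s[:k]) @ Vt[:k]
x_k = tsvd_solve(A, b, k)
```

Discarding the small singular values caps the amplification of noise at roughly $1/\sigma_k$ instead of $1/\sigma_{\min}$, which is the same mechanism the TGSVD exploits for the pair $\{A,L\}$.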
Remark 2.
According to Theorem 1, combined with Formula (10), the values of parameters  λ I  and  λ L  can be given more effectively. Statistical aspects of a negative regularization parameter in Tikhonov’s method are discussed in [7].

3. Randomized GTRTLS Algorithms

In recent years, there have been many results on randomized algorithms [15,16]. In the truncated case, a randomized algorithm can take a random sample of a subspace and capture most of the information of the matrix; that is, a large-scale problem can be randomly projected onto a smaller subspace that still contains its main information, and regularization methods can then be applied to the resulting small-scale problem. In particular, for severely ill-conditioned problems, we find that the GSVD combined with randomization is more effective than the classical GSVD method. The general idea is as follows. First, with high probability, one can select an orthonormal matrix $Q\in\mathbb{R}^{m\times(k+s)}$ such that $\|A-QQ^TA\|\le c\,\bar\sigma_{k+1}$, where $\bar\sigma_{k+1}$ is the $(k+1)$-th largest singular value of $A$ and $c$ is a constant depending on $k$ and the oversampling parameter $s$. It satisfies $\mathcal{R}(A^TQ)\subseteq\mathcal{R}(A^T)$; here, $\mathcal{R}(A^TQ)$ approximates the subspace spanned by the dominant right singular vectors of $A$. Next, a matrix pair $\{Q^TA,\,L\}$ of smaller size is obtained, which can be used to compute the GSVD of $\{A,L\}$ approximately:
\[
\begin{pmatrix}A\\L\end{pmatrix}\approx\begin{pmatrix}QQ^TA\\L\end{pmatrix}
=\begin{pmatrix}U&\\&V\end{pmatrix}\begin{pmatrix}C\\S\end{pmatrix}Z, \tag{11}
\]
where $U\in\mathbb{R}^{m\times l}$ and $V\in\mathbb{R}^{p\times p}$ have orthonormal columns, $Z=X^{-1}\in\mathbb{R}^{n\times n}$ is nonsingular, and $C\in\mathbb{R}^{l\times n}$ and $S\in\mathbb{R}^{p\times n}$ are rectangular diagonal matrices. Randomized sampling can be used to identify a subspace that captures most of the action of a matrix [15], and it provides an efficient way of truncating: the large-scale problem is projected randomly onto a smaller subspace containing the main information, and the resulting small-scale problem is solved by regularization methods. Especially for severely ill-posed problems, randomized algorithms are much more efficient than the classical GSVD, so the advantage of this algorithm is obvious when $m\gg n$. The detailed implementation is given in reference [16]; for the convenience of the reader, we summarize it as follows (Algorithm 1):
Algorithm 1. Randomized GSVD
Input: $A\in\mathbb{R}^{m\times n}$, $L\in\mathbb{R}^{p\times n}$, where $n-p<l<\min\{m,n\}$.
Output: $U\in\mathbb{R}^{m\times l}$ and $V\in\mathbb{R}^{p\times p}$ with orthonormal columns, rectangular diagonal $C\in\mathbb{R}^{l\times n}$ and $S\in\mathbb{R}^{p\times n}$, and nonsingular $Z=X^{-1}\in\mathbb{R}^{n\times n}$.
   1: Generate an $n\times l$ Gaussian random matrix $\Omega$;
   2: Form the $m\times l$ matrix $Y=A\Omega$;
   3: Compute the $m\times l$ orthonormal matrix $Q$ via the QR factorization $Y=QR$;
   4: Form the $l\times n$ matrix $B=Q^TA$;
   5: Compute the GSVD of $\{B,L\}$ in (11): $\begin{pmatrix}B\\L\end{pmatrix}=\begin{pmatrix}W&\\&V\end{pmatrix}\begin{pmatrix}C\\S\end{pmatrix}X^{-1}$;
   6: Form the matrix $U=QW\in\mathbb{R}^{m\times l}$ and denote $Z=X^{-1}$.
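Steps 1–4 of Algorithm 1 can be sketched in NumPy as follows. Since NumPy has no built-in GSVD, the GSVD of $\{B,L\}$ in step 5 would need a dedicated routine (e.g., a LAPACK-based one); the sketch instead verifies how well $Q$ captures the range of $A$. The dimensions and the low-rank-plus-noise test matrix are illustrative assumptions:

```python
import numpy as np

def randomized_range_finder(A, l, rng):
    """Steps 1-3 of Algorithm 1: sample Y = A @ Omega and orthonormalize."""
    Omega = rng.standard_normal((A.shape[1], l))  # n x l Gaussian test matrix
    Y = A @ Omega                                 # m x l sample of range(A)
    Q, _ = np.linalg.qr(Y)                        # m x l orthonormal basis
    return Q

rng = np.random.default_rng(0)
m, n, k = 200, 100, 10
# A numerically rank-k matrix plus a tiny perturbation: its dominant
# range is well captured by k + s sample vectors with high probability.
A = rng.standard_normal((m, k)) @ rng.standard_normal((k, n)) \
    + 1e-8 * rng.standard_normal((m, n))
Q = randomized_range_finder(A, l=k + 5, rng=rng)  # oversampling s = 5
B = Q.T @ A                                       # step 4: l x n projected matrix
err = np.linalg.norm(A - Q @ B, 2)                # ||A - Q Q^T A||_2, tiny here
```

The projected matrix $B$ has only $l$ rows, so the subsequent GSVD of $\{B,L\}$ is far cheaper than a GSVD of $\{A,L\}$ when $m\gg l$.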
Now, we use the randomized GSVD to deal with Tikhonov regularization TLS. In the first step, an approximation of the augmented system (7) is obtained by using the randomized GSVD:
\[
\begin{pmatrix}I_m&0&QQ^TA\\0&I_p&\lambda_L^{1/2}L\\A^TQQ^T&\lambda_L^{1/2}L^T&-\lambda_I I_n\end{pmatrix}
\begin{pmatrix}r\\s\\x\end{pmatrix}=\begin{pmatrix}b\\0\\0\end{pmatrix},
\]
and we have
\[
\begin{pmatrix}I_l&0&CX^{-1}\\0&I_p&\lambda_L^{1/2}SX^{-1}\\X^{-T}C^T&\lambda_L^{1/2}X^{-T}S^T&-\lambda_I I_n\end{pmatrix}
\begin{pmatrix}U^Tr\\V^Ts\\x\end{pmatrix}=\begin{pmatrix}U^Tb\\0\\0\end{pmatrix}.
\]
In the second step, we use Givens transformations to eliminate $\lambda_L^{1/2}S$, which can be expressed as
\[
\begin{pmatrix}C\\\lambda_L^{1/2}S\end{pmatrix}=\bar G\begin{pmatrix}\bar\Sigma\\0\end{pmatrix}
=\begin{pmatrix}\bar G_{11}&\bar G_{12}&\bar G_{13}\\\bar G_{21}&\bar G_{22}&\bar G_{23}\\\bar G_{31}&\bar G_{32}&\bar G_{33}\end{pmatrix}\begin{pmatrix}\bar\Sigma\\0\end{pmatrix},
\]
where $\bar G_{11}\in\mathbb{R}^{l\times l}$, $\bar G_{22}\in\mathbb{R}^{(n-l)\times(n-l)}$, $\bar G_{33}\in\mathbb{R}^{(l+p-n)\times(l+p-n)}$, and $\bar\Sigma\in\mathbb{R}^{n\times n}$.
Let $V=(V_1,V_2)$ with $V_1\in\mathbb{R}^{p\times(n-l)}$ and $V_2\in\mathbb{R}^{p\times(l+p-n)}$; when $\bar G$ is applied to the augmented system, we obtain
\[
\begin{pmatrix}I_n&0&\bar\Sigma X^{-1}\\0&I_{l+p-n}&0\\X^{-T}\bar\Sigma^T&0&-\lambda_I I_n\end{pmatrix}
\begin{pmatrix}U^Tr\\V_1^Ts\\V_2^Ts\\x\end{pmatrix}=\begin{pmatrix}\bar G_{11}^TU^Tb\\\bar G_{12}^TU^Tb\\\bar G_{13}^TU^Tb\\0\end{pmatrix}.
\]
The solution component $V_2^Ts$ can be obtained from the above equation, so only the following system needs to be considered:
\[
\begin{pmatrix}I_n&\bar\Sigma X^{-1}\\X^{-T}\bar\Sigma^T&-\lambda_I I_n\end{pmatrix}
\begin{pmatrix}U^Tr\\V_1^Ts\\x\end{pmatrix}=\begin{pmatrix}\bar G_{11}^TU^Tb\\\bar G_{12}^TU^Tb\\0\end{pmatrix},
\]
or
\[
\big(\bar\Sigma^T\bar\Sigma+\lambda_I X^TX\big)X^{-1}x=\bar\Sigma^T\bar G_{11}^TU^Tb.
\]
In the third step, $\bar\Sigma X^{-1}$ is reduced to a bidiagonal matrix $\bar B$ by orthogonal transformations $\bar H$ and $\bar K$ such that $\bar H^T\bar\Sigma X^{-1}\bar K=\bar B$, giving
\[
\begin{pmatrix}I_n&\bar B\\\bar B^T&-\lambda_I I_n\end{pmatrix}
\begin{pmatrix}\bar H^T\hat r\\\bar K^Tx\end{pmatrix}=\begin{pmatrix}\bar H^T\bar G_{11}^TU^Tb\\0\end{pmatrix}.
\]
Finally, through a series of Givens transformations, the coefficient matrix of the above system can be transformed into a $2n\times 2n$ symmetric indefinite tridiagonal matrix, and the resulting system can be solved by Gaussian elimination with partial pivoting.
To sum up, we call the above procedure the Tikhonov regularized total least squares algorithm based on the randomized generalized singular value decomposition (RGTRTLS algorithm for short).

4. Error Analysis for Randomized Algorithms

First, we review an important result regarding randomized algorithms [15,16].
Lemma 1
(see [15], Corollary 10.9). Suppose that $A\in\mathbb{R}^{m\times n}$ has singular values $\tilde\sigma_1\ge\tilde\sigma_2\ge\dots\ge\tilde\sigma_n$. Let $G$ be an $n\times(k+s)$ standard Gaussian matrix with $k+s\le\min\{m,n\}$ and $s\ge 4$, and let $Q$ be an orthonormal basis for the range of the sampled matrix $AG$. Then,
\[
\|A-QQ^TA\|\le\Big(1+6\sqrt{(k+s)\,s\log s}\Big)\tilde\sigma_{k+1}+3\sqrt{k+s}\Big(\sum_{j>k}\tilde\sigma_j^2\Big)^{1/2}
\]
with probability not less than $1-3s^{-s}$.
Next, a basic theory of perturbation analysis for TRTLS problems is needed.
Theorem 2
([10]). Consider the TRTLS problem (1) and assume that the genericity condition $\sigma_n(A)>\sigma_{n+1}((A,b))$ holds. If $\|(\delta A,\,\delta b)\|_F$ is sufficiently small, then we find that
\[
\frac{\|\delta x\|_2}{\|x\|_2}\le\kappa_A\frac{\|\delta A\|_2}{\|A\|_2}+\kappa_b\frac{\|\delta b\|_2}{\|b\|_2},
\]
where
\[
\kappa_A=\frac{\|A\|_2}{\|x\|_2}\Big(\|r\|_2\|Z_1K\|_2+\|x\|_2\|Z_1KA^T\|_2\Big),\qquad
\kappa_b=\frac{\|b\|_2}{\|x\|_2}\|Z_1KA^T\|_2,
\]
with
\[
r=b-Ax,\qquad Z_1=\big(A^TA+\lambda_I I_n+\lambda_L L^TL\big)^{-1},\qquad
K=I_n+\frac{L^TL\,xx^T L^TL\,Z_1}{x^TL^TL\,Z_1\,L^TL\,x}.
\]
Next, Lemma 1 is applied to the regularization system (1). Since the randomized system (11) can be seen as its perturbation, the following theorem is obtained from Theorem 2.
Theorem 3.
Let $\tilde\sigma_1\ge\tilde\sigma_2\ge\dots\ge\tilde\sigma_n$ be the singular values of the matrix $A$, and let $\alpha=c/\|A\|_2$ with $c=1+6\sqrt{(k+s)\,s\log s}+3\sqrt{(k+s)(n-k)}$, the matrix $(A^T,L^T)^T$ being as in (6). Suppose that Algorithm 1 is executed with a Gaussian matrix $G$ to obtain the GSVD approximation of the matrix pair $\{A,L\}$, and that assumption (5) is satisfied. Let $x_{trtls}$ be the solution of (1), $x_{gtrtls}$ the minimum 2-norm solution of the problem with (11), and $\delta x=x_{trtls}-x_{gtrtls}$. Then we have
\[
\frac{\|\delta x\|_2}{\|x_{trtls}\|_2}\le\alpha\left(\frac{\|A\|_2\|x_{trtls}\|_2}{\|r\|_2}+\frac{\|b\|_2+\|A\|_2\|x_{trtls}\|_2}{\|x_{trtls}\|_2}\right)\|Z_1KA^T\|_2\,\tilde\sigma_{k+1}+O(\tilde\sigma_{k+1}^2)
\]
with probability greater than $1-3s^{-s}$.

5. Numerical Examples

In this section, we illustrate the effectiveness and superiority of our methods through specific examples. The computations are carried out with the Regularization Tools package [22] in MATLAB R2016a.
Example 1.
The test problem is obtained by executing the function ilaplace(n, 2). The matrix $A$ and the exact solution $x$ are given such that $\|A\|_F=\|Ax\|_2=1$, and the perturbed right-hand side is generated as $b=\big(A+\sigma\|E\|_F^{-1}E\big)x+\sigma\|e\|_2^{-1}e$, where the perturbations $E$ and $e$ are formed from a normal distribution with zero mean and unit standard deviation. $L$ is the first derivative operator. The dimensions are $m=n=39$. The noise levels are taken as $\sigma=0.001$, $\sigma=0.01$, $\sigma=0.1$, and $\sigma=1$.
We see that for small values of σ and for the same value of λ L , the three methods result in almost identical minimum relative errors. However, for a larger value of σ , the minimum relative errors of the GTRTLS method and the RGTRTLS method are significantly smaller than that of the TRTLS method, and they occur at smaller values of λ L as shown in Table 1 and Figure 1. So, the potential advantages of the GTRTLS method and the RGTRTLS method are shown.
We find that the calculation time of the RGTRTLS method is less than that of the GTRTLS method, and the GTRTLS method is less than that of the TRTLS method, as shown in Table 2.
Example 2.
The test examples heat, phillips, baart, and shaw are from Hansen's Regularization Tools [22]. Suppose $\sigma$ is the relative noise level. We define
\[
\bar b=b+\sigma\,\|b\|\,\frac{\tau}{\|\tau\|},
\]
where $\tau=2\,\mathrm{rand}(m,1)-1$. It is easy to verify that $\|\bar b-b\|/\|b\|=\sigma$. We set $\sigma=0.001$ and the size $n=1024$ in the tests. The matrix $L$ is $L_1$, and the regularization parameters $\lambda_L$ and $\lambda_I$ are selected by Remark 2.
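The noise construction above is easy to reproduce. In the following NumPy sketch (the vector $b$ and dimension are illustrative assumptions), the relative perturbation comes out to exactly $\sigma$ by construction, since the added term has norm $\sigma\|b\|$:

```python
import numpy as np

rng = np.random.default_rng(0)
m = 1024
b = rng.standard_normal(m)        # stand-in right-hand side

sigma = 0.001                     # relative noise level
tau = 2.0 * rng.random(m) - 1.0   # MATLAB's 2*rand(m,1) - 1: uniform on [-1, 1)
b_bar = b + sigma * np.linalg.norm(b) * tau / np.linalg.norm(tau)

rel_noise = np.linalg.norm(b_bar - b) / np.linalg.norm(b)  # equals sigma
```

Normalizing $\tau$ before scaling is what makes the noise level exact rather than approximate, so comparisons across test problems use identical perturbation magnitudes.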
For a better understanding of the tables below, we list the notation here:
  • $x$ is the true solution of the TLS problem (1).
  • $x_{gsvd}$ is the solution of (1) by the classical GSVD.
  • $x_{rgsvd}$ is the approximate regularized TLS solution of (11) by the randomized algorithm.
  • $E_{gsvd}=\|x-x_{gsvd}\|/\|x\|$ and $E_{rgsvd}=\|x-x_{rgsvd}\|/\|x\|$ are the relative errors.
  • The execution times (in seconds) of GTRTLS and RGTRTLS are $t_{gsvd}$ and $t_{rgsvd}$, respectively.
For n = 1024, the corresponding errors and time are shown in Table 3, and the performance is shown in Figure 2.
We apply the GTRTLS algorithm and the RGTRTLS algorithm to Example 2 and compare the errors and execution times. The randomized approach in Algorithm 1 still shows good performance in Table 3 and is competitive with the classical GSVD, judging from the errors $E_{gsvd}$ and $E_{rgsvd}$ and the execution times $t_{gsvd}$ and $t_{rgsvd}$.
Due to the high memory requirements, we cannot solve large-scale or more complex ill-conditioned problems, such as $n=4096$, using the classical SVD or GSVD. In such cases, one can first apply preconditioning techniques and then use our method for the computation.

6. Conclusions

In this paper, the generalized singular value decomposition technique is used to deal with Tikhonov regularized total least squares problems in order to approximate the true regularized TLS solutions, and the GTRTLS algorithm is proposed. The time complexity of the GTRTLS algorithm is better than that of the TRTLS algorithm proposed by Golub, Hansen, and O'Leary. For medium- and large-scale problems, in order to reduce the storage requirements and accelerate the GSVD computation, this paper adopts the randomized GSVD method and obtains the RGTRTLS algorithm. Numerical examples show the effectiveness and superiority of our algorithms.

Author Contributions

Investigation, T.L.; Resources, X.L.; Writing—original draft, Z.Y. All authors have read and agreed to the published version of the manuscript.

Funding

This research was funded by Natural Science Foundation of Qinghai Province grant number [2018-ZJ-717].

Data Availability Statement

The original contributions presented in the study are included in the article; further inquiries can be directed to the corresponding author.

Conflicts of Interest

The authors declare no conflict of interest.

References

  1. Golub, G.H.; Van Loan, C.F. Matrix Computations, 3rd ed.; Johns Hopkins University Press: Baltimore, MD, USA, 1996. [Google Scholar]
  2. Golub, G.H.; Van Loan, C.F. An analysis of the total least squares problem. SIAM J. Numer. Anal. 1980, 17, 883–893. [Google Scholar] [CrossRef]
  3. Van Huffel, S.; Vandewalle, J. The Total Least Squares Problem: Computational Aspects and Analysis; SIAM: Philadelphia, PA, USA, 1991. [Google Scholar]
  4. Beck, A.; Ben-Tal, A. On the solution of the Tikhonov regularization of the total least squares problem. SIAM J. Optim. 2006, 17, 98–118. [Google Scholar] [CrossRef]
  5. Engl, H.W. Regularization methods for the stable solution of inverse problems. Surv. Math. Ind. 1993, 3, 71–143. [Google Scholar]
  6. Engl, H.W.; Hanke, M.; Neubauer, A. Regularization of Inverse Problems; Kluwer: Dordrecht, The Netherlands, 1996. [Google Scholar]
  7. Golub, G.H.; Hansen, P.C.; O’Leary, D.P. Tikhonov regularization and total least squares. SIAM J. Matrix Anal. Appl. 1999, 21, 185–194. [Google Scholar] [CrossRef]
  8. Hua, T.A.; Gunst, R.F. Generalized ridge regression: A note on negative ridge parameters. Commun. Stat. Theory Methods 1983, 12, 37–45. [Google Scholar] [CrossRef]
  9. Martin, D.R.; Reichel, L. Projected Tikhonov regularization of large-scale discrete ill-posed problems. J. Sci. Comput. 2013, 56, 471–493. [Google Scholar] [CrossRef]
  10. Samar, M.; Lin, F. Perturbation and condition numbers for the Tikhonov regularization of total least squares problem and their statistical estimation. J. Comput. Appl. Math. 2022, 411, 114230. [Google Scholar] [CrossRef]
  11. Tikhonov, A. Solution of incorrectly formulated problems and the regularization method. Soviet Math. Dokl. 1963, 5, 1035–1038. [Google Scholar]
  12. Xia, Y. A fast algorithm for globally solving Tikhonov regularized total least squares problem. J. Glob. Optim. 2018, 73, 311–330. [Google Scholar] [CrossRef]
  13. Chen, Y. Sparse broadband beamformer design via proximal optimization techniques. J. Nonlinear Var. Anal. 2023, 7, 467–485. [Google Scholar]
  14. Hansen, P.C. Regularization GSVD and truncated GSVD. BIT 1989, 29, 491–504. [Google Scholar] [CrossRef]
  15. Halko, N.; Martinsson, P.G.; Tropp, J.A. Finding structure with randomness: Probabilistic algorithms for constructing approximate matrix decompositions. SIAM Rev. 2011, 53, 217–288. [Google Scholar] [CrossRef]
  16. Wei, Y.M.; Xie, P.P.; Zhang, L.P. Tikhonov regularization and randomized GSVD. SIAM J. Matrix Anal. Appl. 2016, 37, 649–675. [Google Scholar] [CrossRef]
  17. Alipour, P. The dual reciprocity boundary element method for one-dimensional nonlinear parabolic partial differential equations. J. Math. Sci. 2024, 280, 131–145. [Google Scholar] [CrossRef]
  18. Avazzadeh, Z.; Hassani, H.; Ebadi, M.J.; Bayati Eshkaftaki, A.; Hendy, A.S. An optimization method for solving a general class of the inverse system of nonlinear fractional order PDEs. Int. J. Comput. Math. 2024, 101, 138–153. [Google Scholar] [CrossRef]
  19. Falsafain, H.; Heidarpour, M.R.; Vahidi, S. A branch-and-price approach to a variant of the cognitive radio resource allocation problem. Ad Hoc Netw. 2022, 132, 102871. [Google Scholar] [CrossRef]
  20. Larijani, A.; Dehghani, F. An efficient optimization approach for designing machine models based on combined algorithm. Fintech 2024, 3, 40–54. [Google Scholar] [CrossRef]
  21. Cichocki, A.; Zdunek, R.; Phan, A.H.; Amari, S.I. Nonnegative Matrix and Tensor Factorizations: Applications to Exploratory Multi-Way Data Analysis and Blind Source Separation; John Wiley & Sons: Hoboken, NJ, USA, 2009. [Google Scholar]
  22. Hansen, P.C. Regularization tools, a MATLAB package for analysis of discrete regularization problems. Numer. Algorithms 1994, 6, 1–35. [Google Scholar] [CrossRef]
Figure 1. Exact solutions, TRTLS solutions, GTRTLS solutions, and RGTRTLS solutions under four values of the noise levels σ.
Figure 2. The comparison for exact solutions, GTRTLS solutions, and RGTRTLS solutions under the value of the noise level σ = 0.001.
Table 1. Relative-error table of the TRTLS method, GTRTLS method, and RGTRTLS method.

n = 39      σ       TRTLS     GTRTLS    RGTRTLS
ilaplace    0.001   0.1076    0.1043    0.1021
            0.01    0.1117    0.1078    0.1070
            0.1     0.1510    0.1240    0.1125
            1       0.2159    0.1719    0.1409
Table 2. Time comparison table of the TRTLS method, GTRTLS method, and RGTRTLS method (in seconds).

n = 39      σ       TRTLS       GTRTLS      RGTRTLS
ilaplace    0.001   0.006832    0.006000    0.004998
            0.01    0.005888    0.005001    0.004012
            0.1     0.005748    0.005003    0.003999
            1       0.005699    0.004999    0.004005
Table 3. The comparison of the GTRTLS method and RGTRTLS method.

n = 1024     heat      phillips   shaw      baart
E_gsvd       0.0565    0.0011     0.0258    0.0874
E_rgsvd      0.062     0.0041     0.0253    0.0896
t_gsvd       18.351    4.315      19.443    16.318
t_rgsvd      1.7946    1.2159     1.8324    1.5146
