Abstract
The numerical approximation of the eigenvalues and singular values of totally positive Bernstein–Vandermonde matrices, Bernstein–Bezoutian structured matrices, Cauchy–polynomial-Vandermonde structured matrices, and quasi-rational Bernstein–Vandermonde structured matrices has been studied extensively in the literature. We present new results on the numerical approximation of the largest singular values of Bernstein–Vandermonde, Bernstein–Bezoutian, Cauchy–polynomial-Vandermonde, and quasi-rational Bernstein–Vandermonde structured matrices. The numerical approximation of the reciprocal of the largest singular value returns the structured singular value. The new results on the numerical approximation of lower bounds for the structured singular values are obtained by computing the largest singular values of totally positive Bernstein–Vandermonde structured matrices, Bernstein–Bezoutian structured matrices, Cauchy–polynomial-Vandermonde structured matrices, and quasi-rational Bernstein–Vandermonde structured matrices. Furthermore, we present the spectral properties of these four classes of structured matrices by computing their eigenvalues, singular values, structured singular values together with lower and upper bounds, and condition numbers.
MSC:
15A18; 05A05
1. Introduction
In numerical linear algebra, the design of accurate and efficient numerical algorithms for classes of structured matrices has remained an active research topic in recent years. Among these structured matrices, an interesting class is that of totally positive matrices. Several classes of highly structured matrices are studied in [1,2,3,4]. The literature listed in these articles covers many aspects of both theory and applications, but it does not address the accuracy and efficiency of numerical computations with such structured matrices. More details on the computational aspects of structured totally positive matrices can be found in [5,6,7].
We consider a special class of totally positive structured matrices, Bernstein–Vandermonde matrices, which were studied in depth in [8] for solving systems of linear equations. This class of matrices has also been used to solve and analyze least squares fitting problems in the Bernstein basis [9]. Bernstein–Vandermonde structured matrices are the straightforward generalization of Vandermonde structured matrices obtained when the Bernstein basis, rather than the monomial basis, is chosen for the space spanned by algebraic polynomials of degree bounded above by n. The Bernstein polynomials were originally introduced about a hundred years ago by Sergei Natanovich Bernstein in order to provide a constructive proof of the Weierstrass approximation theorem. The work of Bézier and de Casteljau introduced the Bernstein polynomials into computer-aided geometric design; see [10] for more details.
Numerical methods based on Bézier curves are popular in computer-aided geometric design (CAGD); see [11,12,13,14,15]. Bézier curves are parameterized in terms of the Bernstein polynomial basis.
The theoretical basis for designing fast and accurate algorithms to compute the greatest common divisor (GCD) of real polynomials of degree at most n expressed in the Bernstein polynomial basis is studied in [16]. Fast algorithms that first determine the power form of the polynomials are studied in [17,18], together with their matrix counterparts [19], for the evaluation of the GCD.
In [16], the Bezout form of two such polynomials is defined by an expression of the form
Furthermore, Bezoutian matrices with respect to different polynomial bases have been studied by various authors; see [20,21,22,23,24].
Structured matrices, particularly Vandermonde and Cauchy matrices, appear in many areas of computation; see [25,26]. Cauchy–Vandermonde structured matrices are useful tools for the numerical approximation of solutions of singular integral equations; for more details, we refer to [27]. Such matrices also occur in connection with the numerical approximation of quadrature problems [28]. In fact, Cauchy–Vandermonde matrices are ill-conditioned. High accuracy in the numerical approximation has been achieved for this class of structured matrices by carefully exploiting their specific structural properties [1,29,30,31,32,33,34,35,36].
Vandermonde matrices appear in the study of interpolation problems that exploit the monomial basis [25]. Polynomial-Vandermonde matrices appear when a polynomial basis is considered rather than the monomial basis, and such matrices arise in many applications, among them approximation, interpolation, and Gaussian quadrature [37,38,39,40,41,42].
An extensive amount of research has been devoted to high-accuracy numerical approximations for many classes of matrices with specific structure. This includes totally positive and totally negative matrices [6,43], totally non-positive matrices [44,45], matrices admitting rank-revealing decompositions [46,47], rank-structured matrices [48,49], diagonally dominant structured M-matrices [50,51], and structured sign regular matrices [7,52]. The numerical approximation of the eigenvalues of structured quasi-rational Bernstein–Vandermonde matrices to high relative accuracy is studied in much greater detail in [53,54].
An extensive amount of work has also been carried out on necessary and sufficient criteria for asymptotic swarm stability. The system under consideration achieves asymptotic swarm stability if and only if there exist Hermitian matrices satisfying a complex Lyapunov inequality for all of the system vertex matrices [55].
The symmetry and asymmetry properties of orthogonal polynomials play a key role in solving systems of differential equations that appear in the mathematical modeling of real-world problems. The classical orthogonal polynomials, for instance Hermite, Legendre, and Laguerre polynomials, and discrete families such as the Krawtchouk and Chebyshev polynomials, have numerous widespread applications across many important branches of science and engineering. In [56], Chebyshev polynomials are used for the simulation of a two-dimensional mass transfer equation subject to Robin and Neumann boundary conditions.
The Boolean complexity for the multiplication of structured matrices by a vector and the solution of nonsingular linear systems of equations with a class of such matrices is studied in [57]. The main focus is to study four basic and most popular classes, that is, Toeplitz, Hankel, Cauchy, and Vandermonde matrices, for which the cited computational problems are equivalent to the task of polynomial multiplication and division and polynomial and rational multipoint evaluation and interpolation.
In this article, we present the spectral properties of a class of structured matrices. We study the behavior of the eigenvalues, singular values, and structured singular values of totally positive Bernstein–Vandermonde matrices, Bernstein–Bezoutian structured matrices, Cauchy–polynomial-Vandermonde structured matrices, and quasi-rational Bernstein–Vandermonde structured matrices. Furthermore, we also present the numerical approximation of the condition numbers of the structured matrices considered in the current study. Our proposed approach differs from the methodology of [58], where a low-rank ODE-based technique was developed for the stability and instability analysis of linear time-invariant systems appearing in control and was based mainly on a two-level (inner-outer) algorithm.
The key and novel contribution of this paper is to study and investigate the spectral properties (particularly the computation of the structured singular values) of totally positive Bernstein–Vandermonde matrices, Bernstein–Bezoutian matrices, Cauchy–polynomial-Vandermonde matrices, and structured quasi-rational Bernstein–Vandermonde matrices.
In Section 2 of this article, we aim to present the definitions of structured totally positive Bernstein–Vandermonde matrices, Bernstein–Bezoutian matrices, Cauchy–polynomial-Vandermonde matrices, and quasi-rational Bernstein–Vandermonde matrices.
We give a brief and concise introduction to the numerical approximation of structured singular values in Section 3. Section 4 contains the main results on the numerical computation of the largest and smallest singular values; the exact behavior of the largest and smallest singular values is also discussed. In Section 5, we present numerical experiments for Bernstein–Vandermonde and Bernstein–Bezoutian matrices; the numerical approximation of the eigenvalues, singular values, and structured singular values is also analyzed and presented. The numerical experiments on the spectral quantities of Cauchy–polynomial-Vandermonde structured matrices and quasi-rational Bernstein–Vandermonde structured matrices are presented in Section 6 and Section 7, respectively. Section 8 contains numerical tests comparing the approximated lower bounds of the structured singular values for a class of higher-dimensional structured Bernstein–Vandermonde matrices. Finally, in Section 9, we present concluding remarks.
2. Preliminaries
Definition 1
([59]). The Bernstein basis on the closed interval for the space of polynomials of degree less than or equal to n is defined as
Definition 2
([59]). The Bernstein–Vandermonde structured matrix for and nodes is defined as
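As a complement to Definition 2, the following is a minimal MATLAB sketch that assembles such a matrix, assuming the standard Bernstein basis of degree n on [0,1] and nodes in (0,1); the function name and node choice are ours and purely illustrative.

```matlab
% Minimal sketch: assemble a Bernstein-Vandermonde matrix for nodes
% x(1) < ... < x(m) in (0,1), assuming the standard Bernstein basis
% b_j^n(x) = nchoosek(n,j) * x^j * (1-x)^(n-j), j = 0,...,n, on [0,1].
function M = bernstein_vandermonde(x, n)
m = numel(x);
M = zeros(m, n+1);
for i = 1:m
    for j = 0:n
        M(i, j+1) = nchoosek(n, j) * x(i)^j * (1 - x(i))^(n-j);
    end
end
end
```

For example, `M = bernstein_vandermonde([0.1 0.3 0.6], 2)` returns a 3x3 matrix of this type.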
Definition 3
([16]). The transformation matrix between Bernstein and power basis is defined as
where
Definition 4
([16]). The Bezoutian matrix for polynomials of degree at most k, that is, , in the Bernstein basis is defined as
Definition 5
([60]). The coefficient matrix (associated with an interpolation problem) of the form
is known as a polynomial-Vandermonde structured matrix if , and as a Cauchy matrix if . Otherwise, it is a Cauchy–polynomial-Vandermonde structured matrix.
Definition 6
([60]). For a sequence of positive integers (in the strict sense) and the Bernstein basis of the space of polynomials on the closed interval of degree less than or equal to , the rational Bernstein basis is defined as
with and
Definition 7
([60]). For rational Bernstein basis , the rational Bernstein–Vandermonde matrix is defined as
Definition 8
([60]). The quasi-rational Bernstein–Vandermonde structured matrix is defined as
where the perturbation parameter satisfies
3. Structured Singular Values
In this section, we introduce a mathematical quantity appearing in control theory known as the structured singular value, that is, the μ-value. Let and . The matrix represents the admissible uncertainty from the set of block diagonal matrices (the set of uncertainties), that is,
Definition 9.
For a given A, the structured singular value, denoted by μ(A), is defined as
If no such uncertainty makes the determinant zero, then
In Definition 9, the determinant is that of the structured matrix under consideration. The exact computation of the structured singular value or μ-value for a class of very large-scale structured matrices is NP-hard [61]. Because of this, one needs to approximate tight bounds on the structured singular values numerically. The numerical computation of tight lower bounds of the structured singular values or μ-values provides enough information to study and discuss the instability of a dynamical system. On the other hand, the numerical approximation of an upper bound of the structured singular value or μ-value is of great help in studying and discussing the stability of the system.
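For orientation, the sketch below shows how such bounds are typically obtained with the MATLAB (Robust Control Toolbox) function mussv; the example matrix and the block structure of three complex scalar blocks are illustrative assumptions, not data from this article.

```matlab
% Minimal sketch: mu bounds for an illustrative 3x3 matrix under a
% block-diagonal uncertainty consisting of three complex scalar blocks.
A    = [1 2 0; 0 1 1; 2 0 1];   % illustrative matrix
blk  = [1 0; 1 0; 1 0];         % block structure: three 1x1 complex blocks
bnds = mussv(A, blk);           % bnds(1) = upper bound, bnds(2) = lower bound
mu_upper = bnds(1);
mu_lower = bnds(2);
```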
4. Main Results
In this section, we aim to present our new and main results concerning the numerical approximation of the largest singular value of . Furthermore, we also discuss the increasing behavior of and the decreasing behavior of . The following Theorem 1 allows the computation of .
Definition 10.
For a given matrix , the scalars λ are called the eigenvalues of A if det(A − λI) = 0, where I denotes the identity matrix of the same dimension as the matrix A.
Definition 11.
For a given matrix , the non-negative numbers are known as the singular values of A if A can be decomposed as A = UΣV^T, where the matrices U and V are orthogonal and Σ is a diagonal matrix having the singular values on its main diagonal.
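The following short MATLAB sketch illustrates Definitions 10 and 11 on an arbitrary illustrative matrix and checks the standard relation between the singular values of A and the eigenvalues of A'*A.

```matlab
% Minimal sketch illustrating Definitions 10 and 11 (illustrative matrix).
A = [2 1 0; 1 3 1; 0 1 2];
lam = eig(A);                      % eigenvalues of A (Definition 10)
[U, S, V] = svd(A);                % A = U*S*V' with U, V orthogonal (Definition 11)
sig = diag(S);                     % singular values on the diagonal of S
err = norm(sort(sig) - sort(sqrt(eig(A'*A))));   % should be close to zero
```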
Lemma 1.
Let be a smooth matrix family. Let denote the eigenvalue of that converges to a simple eigenvalue of as . Then, the continuous branch of eigenvalues is analytic nearby, with
Here, and denote the left and right singular vectors of corresponding to
Theorem 1.
For a given , , the largest singular value is obtained as
where
and
Proof.
First, we show that the above expressions for and are valid. We consider the following factorization of A; for details of this factorization, we refer interested readers to [62].
Here, D denotes a non-negative diagonal matrix, while E has full rank.
For the quantity , which is the numerator of , we make use of the arithmetic–geometric mean inequality, which allows us to write the following inequality for the singular values of D.
Next, we apply the arithmetic–geometric mean inequality to the quantity , which yields
Equations (12) and (13) allow us to write
Finally, from Equation (14), we have
For the quantity , which is the denominator of , we make use of the arithmetic–geometric mean inequality for the singular values of D as
In addition,
The inequalities in Equations (15) and (16) yield
Because the matrix 2-norm of the matrix D can be written as
Therefore, Equations (17) and (18) imply that
or
In a similar way, we can obtain the expressions for the numerator and denominator of . Now, we aim to prove that .
Since and , the matrix D takes the form and . Making use of the singular value decomposition of yields
From [63], can be written as
with
In a similar way, from [63], takes the following form:
with .
The singular value decomposition of yields where As,
and
□
The increasing behavior of for is given in Theorem 2. Furthermore, and . For simplicity, we omit the dependency of and on t in Theorem 2.
Theorem 2.
Let be submatrices of , and let be the largest singular values of and the largest singular values of . The largest singular values of satisfy the inequality
Proof.
For , the submatrices and matrices can be written as
and
In Equations (19) and (20), denotes all the k components of the sub-matrix and denotes all the components of for . Let and denote the left and right singular vectors of ; then
From Equation (21), we have
In Equation (22), and denote the right and left singular vectors corresponding to the family of matrices , respectively. From Equations (21) and (22), we have
□
The decreasing behavior of for is given in Theorem 3.
Theorem 3.
Let be sub-matrices of , and let be the smallest singular values of and the smallest singular values of . The smallest singular values satisfy the inequality
Proof.
For , the matrices and can be written as
and
Let and be left hand and right hand singular vectors of , then
Now,
□
Next, we aim to fix the largest singular value for such that . For this purpose, we make use of an inner-outer algorithm: the inner algorithm develops and then solves an optimization problem which, in turn, yields a system of ordinary differential equations (ODEs), while the outer algorithm adjusts the perturbation level via a fast Newton iteration. For more details, we refer to [58].
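A heavily simplified sketch of this outer iteration is given below; innerSolve is a hypothetical placeholder for the ODE-based inner algorithm of [58] (assumed to return the quantity that is driven to zero and its derivative with respect to the perturbation level), and the stopping parameters are illustrative.

```matlab
% Sketch of the outer Newton iteration on the perturbation level eps.
% innerSolve is a hypothetical stand-in for the inner ODE-based algorithm
% of [58]; it is NOT part of that reference's actual interface.
function mu_lower = outer_newton(A, blk, eps0, tol, maxit)
eps_k = eps0;
for k = 1:maxit
    [f, df] = innerSolve(A, blk, eps_k);   % placeholder inner ODE solve
    eps_new = eps_k - f/df;                % fast Newton step on eps
    if abs(eps_new - eps_k) <= tol*abs(eps_k)
        eps_k = eps_new;
        break
    end
    eps_k = eps_new;
end
mu_lower = 1/eps_k;   % the reciprocal of the final level yields a lower bound
end
```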
5. Totally Positive Bernstein–Vandermonde and Bernstein–Bezoutian Matrices
The following theorem guarantees that if all of the computed minors are non-negative (respectively, positive), then the given matrix is a totally positive (respectively, strictly totally positive) matrix.
Theorem 4
([59]). A structured matrix A is strictly totally positive if its Neville elimination [21, 22], and that of its transpose, can be performed without any row or column exchanges, all multipliers of the Neville elimination of A and of its transpose are positive, and A has positive diagonal pivots in its Neville elimination.
Proof.
For proof, see [59]. □
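As an illustration of the Neville elimination test of Theorem 4, the following is a minimal MATLAB sketch; it assumes that no zero entries are encountered (so no row or column exchanges are needed), and the function names are ours.

```matlab
% Minimal sketch: strict total positivity test via Neville elimination.
function tf = is_strictly_tp(A)
tf = neville_check(A) && neville_check(A.');   % test A and its transpose
end

function ok = neville_check(B)
n  = size(B,1);
ok = true;
for k = 1:n-1                        % create zeros below the diagonal of column k
    for i = n:-1:k+1
        m = B(i,k)/B(i-1,k);         % Neville multiplier (uses the row above)
        B(i,:) = B(i,:) - m*B(i-1,:);
        ok = ok && (m > 0);          % all multipliers must be positive
    end
end
ok = ok && all(diag(B) > 0);         % all diagonal pivots must be positive
end
```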
The Bernstein–Vandermonde matrix is a strictly totally positive structured matrix for given nodes . This result is a consequence of the following theorem.
Theorem 5
([59]). Let be a Bernstein–Vandermonde structured matrix. If the nodes of M satisfy , then M admits the matrix factorization of the form
where are bi-diagonal structured matrices of the form
are bi-diagonal structured matrices of the form
and the matrix D is diagonal having order such that
For the given matrix M, are the possible multipliers of Neville elimination and are obtained as
For , the quantities are the multipliers of the Neville elimination and are obtained as
Finally, the -diagonal element of D is the diagonal pivot of Neville elimination of M and is expressed as
Proof.
It can be seen in [59]. □
Spectral Properties of Totally Positive Bernstein–Vandermonde and Bernstein–Bezoutian Matrices
In this subsection, we aim to present the spectral properties of Bernstein–Vandermonde structured matrices and Bernstein–Bezoutian structured matrices. These structured matrices are taken from [59] for the numerical approximation of the structured singular values.
In our numerical experiments, we make use of the MATLAB functions eig and svd for the numerical approximation of the eigenvalues and singular values. The main aim is to numerically approximate the lower bounds of the structured singular value or μ-value, which is a straightforward generalization of the singular values for a class of structured matrices. Furthermore, the MATLAB function mussv is used for the numerical approximation of both the lower and upper bounds of the structured singular values.
Example 1.
We consider a Bernstein–Vandermonde structured matrix
The first column of Table 1 contains the eigenvalues of approximated numerically with the MATLAB function eig. The second column contains the singular values approximated with the MATLAB function svd. The third and fourth columns contain the numerical approximations of the upper and lower bounds of the structured singular values obtained with the MATLAB function mussv. The numerical approximations of the lower bounds of the structured singular values or μ-values obtained with the methodology based on low-rank ODEs [58] are reported in the last column of the table.
In Figure 1, the left-hand subfigure shows the plots of the eigenvalues, singular values, and the numerically approximated lower and upper bounds of the structured singular values against the time t. The blue dotted line at the bottom of the left subfigure denotes the spectrum, that is, the eigenvalues of . Because singular values are non-negative numbers, the red dotted line indicates that the eigenvalues are bounded above by the singular values. The golden dotted line shows that all quantities, that is, the eigenvalues, the singular values, the lower bounds of the structured singular values approximated with the mussv function (purple dotted line), and the lower bounds of the structured singular values approximated with [58] (turquoise dotted line), are strictly bounded by the numerical approximation of the upper bounds of the structured singular values or μ-values obtained with the MATLAB function mussv.
Figure 1.
Behaviour of eigenvalues, singular values, structured singular values and condition numbers.
In Figure 1, the right-hand subfigure shows the plots of the condition numbers against time. Furthermore, the behaviour of the spectral condition numbers , , and is shown in the right-hand subfigure of Figure 1.
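For reference, the condition numbers reported in the figures and tables can be computed in MATLAB as sketched below; the matrix is an illustrative stand-in, not the Bernstein–Vandermonde matrix of the example.

```matlab
% Minimal sketch: condition numbers in the 2-, 1-, and infinity-norms.
A = [2 1 0; 1 3 1; 0 1 2];      % illustrative matrix
kappa2   = cond(A);             % spectral condition number, sigma_max/sigma_min
kappa1   = cond(A, 1);          % 1-norm condition number
kappaInf = cond(A, inf);        % infinity-norm condition number
```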
In Figure 1, the behavior of the continuous branch of the eigenvalues (in absolute value) of the three-dimensional Bernstein–Vandermonde matrix is shown with a dotted blue line in the left subfigure. This continuous branch of eigenvalues is dominated by the continuous branch of singular values (red dotted line) for the values of . This is because the computation of the singular values is a generalization of that of the eigenvalues. However, surprisingly, after , the behavior of both the eigenvalues and the singular values changes abruptly. The dotted line in light olive (the topmost line) represents the behavior of the numerical approximation of the upper bounds of the structured singular values computed with the MATLAB function mussv. All continuous branches of eigenvalues, singular values, and numerically approximated lower bounds of the structured singular values are bounded above by the numerical approximation of the upper bounds of the structured singular values or μ-values (light olive). This is because the structured singular value or μ-value is a generalization of both the eigenvalues and the singular values.
Example 2.
The following Bernstein–Bezoutian matrix is taken from [16]. The polynomials
and
give the matrix
The first column of Table 2 contains the eigenvalues of computed with the MATLAB function eig. The second column contains the singular values approximated with the MATLAB function svd. The third and fourth columns contain the numerical approximations of the upper and lower bounds of the structured singular values or μ-values obtained with the MATLAB function mussv. The lower bounds of the structured singular values or μ-values computed with [58] are reported in the last column.
The behaviour of the spectral condition numbers for all , and of the condition numbers and for all , is shown in Table 3.
6. Cauchy–Polynomial-Vandermonde Matrices (CPV Matrices)
The following lemma by Zhao Yang [60] provides results for the determinant of CPV-matrices.
Lemma 2
([60]). Let be a Cauchy–polynomial-Vandermonde matrix; then the expression for the determinant is given as
Proof.
For proof, see [60]. □
The minors of Cauchy–polynomial-Vandermonde matrices are computed with the help of the following result.
Theorem 6
([60]). Let be a Cauchy–polynomial-Vandermonde matrix; then
which holds true for all .
Furthermore, we have that
where
with
Proof.
For proof, see [60]. □
Spectral Properties of Cauchy–Polynomial-Vandermonde Matrices
In this subsection, we aim to present and discuss the meaningful spectral properties of Cauchy–polynomial-Vandermonde structured matrices. For the numerical approximation of the structured singular values or μ-values, we take a class of test matrices from [60].
We make use of the built-in MATLAB functions eig and svd to numerically approximate both the eigenvalues and singular values. Our main aim is to numerically approximate the lower bounds of the structured singular values, a simple and straightforward generalization of the singular values for structured matrices. Furthermore, we make use of the built-in MATLAB function mussv to numerically approximate both the lower and upper bounds of the structured singular values or μ-values of the structured matrices.
Example 3.
Consider a Cauchy–polynomial-Vandermonde matrix
The matrix is obtained with and which are given as
For , we consider
The first column of Table 4 contains the approximated eigenvalues of computed with the MATLAB function eig. The second column contains the singular values computed with the MATLAB function svd. The third and fourth columns contain the numerical approximations of the upper and lower bounds of the structured singular values obtained with the MATLAB function mussv. The numerical approximations of the lower bounds of the structured singular values or μ-values obtained with the algorithm of [58] are reported in the last column.
In Figure 2, the left-hand subfigure shows the plots of the eigenvalues, singular values, and the numerically approximated lower and upper bounds of the structured singular values or μ-values against the time t. The blue dotted line denotes the spectrum, that is, the eigenvalues of . Because singular values are non-negative numbers, the red dotted line indicates that all eigenvalues are bounded above by the singular values. The golden dotted line shows that all quantities, that is, the positive eigenvalues, the singular values, the lower bounds of the structured singular values or μ-values approximated with the mussv function (purple dotted line), and the numerically approximated lower bounds of the structured singular values obtained with [58] (turquoise dotted line), are strictly bounded by the numerically approximated upper bounds of the structured singular values obtained with the MATLAB function mussv.
Figure 2.
Behavior of eigenvalues, singular values, structured singular values, and condition numbers.
7. Quasi-Rational Bernstein–Vandermonde Matrices
The following result from [60] concerns the computation of the determinant of quasi-rational Bernstein–Vandermonde structured matrices.
Theorem 7
([60]). Let be a quasi-rational Bernstein–Vandermonde matrix. Then, the determinant is computed as
In addition,
and
Proof.
For proof, see [60]. □
The parametric matrix for a quasi-rational Bernstein–Vandermonde matrix is given by the following theorem.
Theorem 8.
Let be a non-singular quasi-rational Bernstein–Vandermonde matrix. The parametric matrix is the following matrix
,
and
Spectral Properties of Quasi-Rational Bernstein–Vandermonde Matrices
In this subsection, we aim to present important and meaningful spectral properties of quasi-rational Bernstein–Vandermonde matrices. These matrices are taken from the paper [60] for the numerical approximations of structured singular values.
We make use of the well-known MATLAB functions eig and svd to numerically approximate both the eigenvalues and singular values. Our main objective is to numerically approximate the lower bounds of the structured singular value or μ-value, which is a straightforward generalization of the singular values for constant structured matrices. Furthermore, we make use of the MATLAB function mussv to numerically compute both the lower and upper bounds of the structured singular values or μ-values of constant structured matrices.
Example 4.
Consider a quasi-rational Bernstein–Vandermonde matrix
The computed eigenvalues and singular values are and , respectively. The first and second columns represent the numerically approximated upper and lower bounds of the structured singular values or μ-values obtained with the MATLAB function mussv. The numerically approximated lower bounds of the structured singular values or μ-values obtained with the algorithm of [58] are reported in the last column of Table 5.
In Figure 3, the left-hand subfigure shows the plots of the eigenvalues, singular values, and the numerically approximated lower and upper bounds of the structured singular values or μ-values against the time t. The blue dotted line at the bottom of the left subfigure denotes the spectrum, that is, the eigenvalues of . Because singular values are non-negative numbers, the red dotted line indicates that the numerically approximated eigenvalues are bounded above by the singular values. The golden dotted line shows that all quantities, that is, the eigenvalues, the singular values, the lower bounds of the structured singular values or μ-values approximated with the MATLAB mussv function (purple dotted line), and the numerically approximated lower bounds of the structured singular values or μ-values obtained with the algorithm of [58] (turquoise dotted line), are strictly bounded by the numerically computed upper bounds of the structured singular values obtained with the MATLAB function mussv.
Figure 3.
Behavior of eigenvalues, singular values, structured singular values, and condition numbers.
8. Numerical Testing for Matrices in Higher Dimensions
In this section, we present a comparison of the numerically approximated bounds of the structured singular values for Bernstein–Vandermonde structured matrices in higher dimensions. For the numerical tests, we choose Bernstein–Vandermonde matrices of sizes , respectively.
The first column of Table 6 denotes the size of the square Bernstein–Vandermonde structured matrices under consideration in this article. The second and third columns give the numerical approximations of the upper and lower bounds of the structured singular values or μ-values obtained with the MATLAB function mussv, respectively. The fourth and last column of Table 6 shows the numerical approximations of the lower bounds of the structured singular value or μ-value computed with the algorithm of [58]. For size 10, the lower bound of the structured singular value approximated numerically with the MATLAB mussv function is much better. However, for the sizes , the lower bounds approximated with [58] are significantly better than the lower bounds approximated with the MATLAB function mussv.
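A minimal sketch of such a size comparison is given below, reusing the bernstein_vandermonde constructor sketched in Section 2; the equispaced node choice and the block structure are illustrative assumptions and do not reproduce the test set of Table 6.

```matlab
% Minimal sketch: mussv bounds for Bernstein-Vandermonde matrices of
% increasing size (Robust Control Toolbox required for mussv).
for n = [5 10 20 40]
    x    = (1:n) / (n + 1);                    % equispaced nodes in (0,1)
    M    = bernstein_vandermonde(x, n-1);      % n-by-n Bernstein-Vandermonde matrix
    blk  = [ones(n,1) zeros(n,1)];             % n complex scalar blocks
    bnds = mussv(M, blk);                      % [upper bound, lower bound]
    fprintf('n = %2d   upper = %.4e   lower = %.4e\n', n, bnds(1), bnds(2));
end
```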
Algorithm 1 Approximate perturbation level.
9. Conclusions
In this article, we have presented and analyzed the numerical approximation of a mathematical tool commonly known as the structured singular value for a class of totally positive Bernstein–Vandermonde structured matrices, Bernstein–Bezoutian structured matrices, Cauchy–polynomial-Vandermonde structured matrices, and quasi-rational Bernstein–Vandermonde structured matrices. The numerical computation of both the eigenvalues and singular values of these classes of structured matrices was carried out with the MATLAB functions eig and svd. The new contribution of this article is the comparison of the numerically approximated lower bounds of the structured singular values with those obtained with the MATLAB function mussv from the Robust Control Toolbox.
Author Contributions
M.-U.R. introduced the problem, and J.A. and N.F. validated the results. M.-U.R. and T.H.R. wrote the manuscript. All authors have read and agreed to the published version of the manuscript.
Funding
This research received no external funding.
Data Availability Statement
No datasets were generated or analyzed during the current study.
Acknowledgments
J. Alzabut and N. Fatima are thankful to Prince Sultan University for its endless support. J. Alzabut is appreciative of OSTIM Technical University for unwavering assistance.
Conflicts of Interest
The authors declare no conflict of interest.
References
- James, D.; Ioana, D.; Olga, H.; Plamen, K. Accurate and Efficient Expression Evaluation and Linear Algebra; Acta Numerica; Cambridge University Press: Cambridge, UK, 2008; Volume 10, pp. 87–145. [Google Scholar]
- Fallat, S.M.; Johnson, C.R. Totally Nonnegative Matrices; Princeton University Press: Princeton, NJ, USA, 2011. [Google Scholar]
- Ando, T. Totally Positive Matrices; Cambridge University Press: Cambridge, UK, 2010; p. 181. [Google Scholar]
- Andalib, T.W.; Azizan, N.A.; Halim, H.A. Case Matrices and Connections of Entrepreneurial Career Management Module. Int. J. Entrep. 2019, 23, 1–10. [Google Scholar]
- James, D.; Plamen, K. The accurate and efficient solution of a totally positive generalized Vandermonde linear system. SIAM J. Matrix Anal. Appl. 2005, 27, 142–152. [Google Scholar]
- Koev, P. Accurate eigenvalues and SVDs of totally nonnegative matrices. SIAM J. Matrix Anal. Appl. 2005, 27, 1–23. [Google Scholar] [CrossRef]
- Koev, P. Accurate computations with totally nonnegative matrices. SIAM J. Matrix Anal. Appl. 2007, 29, 731–751. [Google Scholar] [CrossRef]
- Marco, A.; Martı, J.J. A fast and accurate algorithm for solving Bernstein-Vandermonde linear systems. Linear Algebra Its Appl. 2007, 4, 616–628. [Google Scholar] [CrossRef]
- Marco, A.; Martı, J.J. Polynomial least squares fitting in the Bernstein basis. Linear Algebra Its Appl. 2010, 433, 1254–1264. [Google Scholar] [CrossRef]
- Farin, G.E.; Farin, G. Curves and Surfaces for CAGD: A Practical Guide; Morgan Kaufmann: Burlington, MA, USA, 2002. [Google Scholar]
- Farin, G. Curves and Surfaces for Computer-Aided Geometric Design: A Practical Guide; Elsevier: Amsterdam, The Netherlands, 2014. [Google Scholar]
- Farin, G.; Hamann, B. Current trends in geometric modeling and selected computational applications. J. Comput. Phys. 1997, 138, 1–15. [Google Scholar] [CrossRef]
- Farin, G.E.; Hansford, D. The Geometry Toolbox for Graphics and Modeling; AK Peters/CRC Press: Boca Raton, FL, USA, 2017. [Google Scholar]
- Forrest, A.R. Interactive interpolation and approximation by Bézier polynomials. Comput. J. 1972, 15, 71–79. [Google Scholar] [CrossRef]
- Wolters, H.J.; Farin, G. Geometric curve approximation. Comput. Aided Geom. Des. 1997, 14, 499–513. [Google Scholar] [CrossRef]
- Bini, D.A.; Gemignani, L. Bernstein-bezoutian matrices. Theor. Comput. Sci. 2004, 315, 319–333. [Google Scholar] [CrossRef]
- Brown, W.S. On Euclid’s algorithm and the computation of polynomial greatest common divisors. J. ACM 1971, 18, 478–504. [Google Scholar] [CrossRef]
- Collins, G.E. Subresultants and reduced polynomial remainder sequences. J. ACM 1967, 14, 128–142. [Google Scholar] [CrossRef]
- Bini, D.A.; Gemignani, L. Fast fraction-free triangularization of Bezoutians with applications to sub-resultant chain computation. Linear Algebra Its Appl. 1998, 284, 19–39. [Google Scholar] [CrossRef]
- Barnett, S. A Bezoutian Matrix for Chebyshev Polynomials; University of Bradford, School of Mathematical Sciences: Bradford, UK, 1987. [Google Scholar]
- Gemignani, L. Fast and Stable Computation of the Barycentric Representation of Rational Interpolants. Calcolo 1996, 33, 371–388. [Google Scholar] [CrossRef]
- Gohberg, I.; Olshevsky, V. Fast inversion of Chebyshev-Vandermonde matrices. Numer. Math. 1994, 67, 71–92. [Google Scholar] [CrossRef]
- Kailath, T.; Olshevsky, V. Displacement-structure approach to polynomial Vandermonde and related matrices. Linear Algebra Its Appl. 1997, 261, 49–90. [Google Scholar] [CrossRef]
- Rost, K. Generalized companion matrices and matrix representations for generalized Bezoutians. Linear Algebra Its Appl. 1993, 193, 151–172. [Google Scholar] [CrossRef][Green Version]
- Pan, V. Structured Matrices and Polynomials: Unified Superfast Algorithms; Springer Science & Business Media: Berlin/Heidelberg, Germany, 2001. [Google Scholar]
- Phillips, G.M. Interpolation and Approximation by Polynomials; Springer Science & Business Media: Berlin/Heidelberg, Germany, 2003. [Google Scholar]
- Junghanns, P.; Oestreich, D. Numerische Lösung des Staudammproblems mit Drainage. Z. Angew. Math. Mech. 1989, 69, 83–92. [Google Scholar] [CrossRef]
- Weideman, J.A.C.; Laurie, D.P. Quadrature rules based on partial fraction expansions. Numer. Algorithms 2000, 24, 159–178. [Google Scholar] [CrossRef]
- Dailey, M.; Dopico, F.M.; Ye, Q. Relative perturbation theory for diagonally dominant matrices. SIAM J. Matrix Anal. Appl. 2014, 35, 1303–1328. [Google Scholar] [CrossRef]
- Dailey, M.; Dopico, F.M.; Ye, Q. A new perturbation bound for the LDU factorization of diagonally dominant matrices. SIAM J. Matrix Anal. Appl. 2014, 35, 904–930. [Google Scholar] [CrossRef][Green Version]
- Demmel, J.; Kahan, W. Accurate singular values of bidiagonal matrices. SIAM J. Sci. Stat. Comput. 1990, 11, 873–912. [Google Scholar] [CrossRef]
- Demmel, J.; Gu, M.; Eisenstat, S.; Slapničar, I.; Veselić, K.; Drmač, Z. Computing the singular value decomposition with high relative accuracy. Linear Algebra Its Appl. 1999, 299, 21–80. [Google Scholar] [CrossRef]
- Demmel, J.; Koev, P. Accurate SVDs of weakly diagonally dominant M-matrices. Numer. Math. 2004, 98, 99–104. [Google Scholar] [CrossRef]
- Dopico, F.M.; Koev, P. Accurate symmetric rank revealing and eigendecompositions of symmetric structured matrices. SIAM J. Matrix Anal. Appl. 2006, 28, 1126–1156. [Google Scholar] [CrossRef]
- Dopico, F.; Koev, P. Perturbation theory for the LDU factorization and accurate computations for diagonally dominant matrices. Numer. Math. 2011, 119, 337–371. [Google Scholar] [CrossRef]
- Arif, M.S.; Abodayeh, K.; Nawaz, Y. Numerical Schemes for Fractional Energy Balance Model of Climate Change with Diffusion Effects. Emerg. Sci. J. 2023, 7, 808–820. [Google Scholar] [CrossRef]
- Boros, T.; Kailath, T.; Olshevsky, V. Fast algorithms for solving Vandermonde and Chebyshev-Vandermonde systems. Stanf. Inf. Syst. Lab. Rep. 1994. [Google Scholar]
- Higham, N.J. Fast solution of Vandermonde-like systems involving orthogonal polynomials. IMA J. Numer. Anal. 1988, 8, 473–486. [Google Scholar] [CrossRef]
- Higham, N.J. Stability analysis of algorithms for solving confluent Vandermonde-like systems. SIAM J. Matrix Anal. Appl. 1990, 11, 23–41. [Google Scholar] [CrossRef][Green Version]
- Kailath, T.; Olshevsky, V. Displacement structure approach to Chebyshev-Vandermonde and related matrices. Integral Equ. Oper. Theory 1995, 22, 65–92. [Google Scholar] [CrossRef]
- Reichel, L.; Opfer, G. Chebyshev-vandermonde systems. Math. Comput. 1991, 57, 703–721. [Google Scholar] [CrossRef]
- Verde-Star, L. Inverses of generalized Vandermonde matrices. J. Math. Anal. Appl. 1988, 131, 341–353. [Google Scholar] [CrossRef]
- Marco, A.; Martínez, J.-J.; Peña, J.M. Accurate bidiagonal decomposition of totally positive Cauchy–Vandermonde matrices and applications. Linear Algebra Its Appl. 2017, 517, 63–84. [Google Scholar] [CrossRef]
- Huang, R.; Chu, D. Relative perturbation analysis for eigenvalues and singular values of totally nonpositive matrices. SIAM J. Matrix Anal. Appl. 2015, 36, 476–495. [Google Scholar] [CrossRef]
- Huang, R.; Chu, D. Computing singular value decompositions of parameterized matrices with total nonpositivity to high relative accuracy. J. Sci. Comput. 2017, 71, 682–711. [Google Scholar] [CrossRef]
- Dopico, F.M.; Molera, J.M.; Moro, J. An orthogonal high relative accuracy algorithm for the symmetric eigenproblem. SIAM J. Matrix Anal. Appl. 2003, 25, 301–351. [Google Scholar] [CrossRef]
- Dopico, F.M.; Koev, P.; Molera, J.M. Implicit standard Jacobi gives high relative accuracy. Numer. Math. 2009, 113, 519–553. [Google Scholar] [CrossRef]
- Huang, R. Rank structure properties of rectangular matrices admitting bidiagonal-type factorizations. Linear Algebra Its Appl. 2015, 465, 1–14. [Google Scholar] [CrossRef]
- Yang, Z.; Huang, R.; Zhu, W.; Liu, J. Accurate solutions of structured generalized Kronecker product linear systems. Numer. Algorithms 2021, 87, 797–818. [Google Scholar] [CrossRef]
- Alfa, A.; Xue, J.; Ye, Q. Accurate computation of the smallest eigenvalue of a diagonally dominant M-matrix. Math. Comput. 2002, 71, 217–236. [Google Scholar] [CrossRef]
- Ye, Q. Computing singular values of diagonally dominant matrices to high relative accuracy. Math. Comput. 2008, 77, 2195–2230. [Google Scholar] [CrossRef]
- Huang, R. A test and bidiagonal factorization for certain sign regular matrices. Linear Algebra Its Appl. 2013, 438, 1240–1251. [Google Scholar] [CrossRef]
- Yang, Z.; Ma, X.-X. Computing eigenvalues of quasi-rational Bernstein-Vandermonde matrices to high relative accuracy. Numer. Linear Algebra Appl. 2022, 29, 2421. [Google Scholar] [CrossRef]
- Ameer, E.; Nazam, M.; Aydi, H.; Arshad, M.; Mlaiki, N. On (Λ, Y, R)-contractions and applications to nonlinear matrix equations. Mathematics 2019, 7, 443. [Google Scholar] [CrossRef]
- Riazat, M.; Azizi, A.; Naderi Soorki, M.; Koochakzadeh, A. Robust Consensus in a Class of Fractional-Order Multi-Agent Systems with Interval Uncertainties Using the Existence Condition of Hermitian Matrices. Axioms 2023, 12, 65. [Google Scholar] [CrossRef]
- Ali, I.; Saleem, M.T. Applications of Orthogonal Polynomials in Simulations of Mass Transfer Diffusion Equation Arising in Food Engineering. Symmetry 2023, 15, 527. [Google Scholar] [CrossRef]
- Pan, V.Y.; Tsigaridas, E.P. Nearly optimal computations with structured matrices. In Proceedings of the 2014 Symposium on Symbolic-Numeric Computation, Shanghai, China, 28–31 July 2014; pp. 21–30. [Google Scholar]
- Guglielmi, N.; Rehman, M.-U.; Kressner, D. A novel iterative method to approximate structured singular values. SIAM J. Matrix Anal. Appl. 2017, 38, 361–386. [Google Scholar] [CrossRef]
- Marco, A.; Martínez, J.-J. Accurate computations with totally positive Bernstein-Vandermonde matrices. Electron. J. Linear Algebra 2013, 26, 357–380. [Google Scholar] [CrossRef]
- Yang, Z.; Huang, R.; Zhu, W. Accurate computations for eigenvalues of products of Cauchy-polynomial-Vandermonde matrices. Numer. Algorithms 2020, 85, 329–351. [Google Scholar] [CrossRef]
- Braatz, R.P.; Young, P.M.; Doyle, J.C.; Morari, M. Computational complexity of μ calculation. IEEE Trans. Autom. Control. 1994, 39, 1000–1002. [Google Scholar] [CrossRef]
- Levin, D. The approximation power of moving least-squares. Math. Comput. 1998, 67, 1517–1531. [Google Scholar] [CrossRef]
- Jabbari, F. Linear System Theory II, Chapter 3: Eigenvalue, Singular Values, Pseudoinverse; The Henry Samueli School of Engineering, University of California: Irvine, CA, USA, 2015; Available online: http://gram.eng.uci.edu/fjabbari/me270b/me270b.html (accessed on 20 August 2023).
Disclaimer/Publisher’s Note: The statements, opinions and data contained in all publications are solely those of the individual author(s) and contributor(s) and not of MDPI and/or the editor(s). MDPI and/or the editor(s) disclaim responsibility for any injury to people or property resulting from any ideas, methods, instructions or products referred to in the content.
© 2023 by the authors. Licensee MDPI, Basel, Switzerland. This article is an open access article distributed under the terms and conditions of the Creative Commons Attribution (CC BY) license (https://creativecommons.org/licenses/by/4.0/).