1. Introduction
In numerical linear algebra, the design of accurate and efficient numerical algorithms for classes of structured matrices has remained an active research topic in recent years. Among these structured matrices, a particularly interesting class is that of totally positive matrices. Several classes of highly structured matrices are studied in [1,2,3,4]. The literature in these articles covers many aspects of both theory and application, but does not address accurate and efficient numerical computation with such structured matrices. More details on the computational aspects of structured totally positive matrices can be found in [5,6,7].
We consider a special class of totally positive structured matrices, deeply studied and analyzed in [8] for solving systems of linear equations: Bernstein–Vandermonde matrices. This class of matrices has been used to solve and analyze least squares fitting problems in the Bernstein basis [9]. Bernstein–Vandermonde matrices are the straightforward generalization of Vandermonde matrices obtained when the Bernstein basis, rather than the monomial basis, is chosen for the space of algebraic polynomials of degree at most n. Bernstein polynomials were introduced about a hundred years ago by Sergei Natanovich Bernstein in order to give a constructive proof of the Weierstrass approximation theorem. The work of Bézier and de Casteljau introduced Bernstein polynomials into computer-aided geometric design; see [10] for more details.
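As an illustration (this sketch is not taken from the article), a Bernstein–Vandermonde matrix can be assembled by evaluating the degree-n Bernstein basis polynomials b_{n,j}(x) = C(n,j) x^j (1-x)^{n-j} at a set of nodes; the function name and the choice of nodes in (0,1) below are assumptions for the example.

```python
from math import comb

def bernstein_vandermonde(nodes, n):
    """Matrix with entries b_{n,j}(x_i) = C(n,j) * x_i^j * (1 - x_i)^(n - j)."""
    return [[comb(n, j) * x**j * (1 - x)**(n - j) for j in range(n + 1)]
            for x in nodes]

# Each row sums to 1 because the Bernstein basis forms a partition of unity.
B = bernstein_vandermonde([0.1, 0.3, 0.6, 0.9], 3)
row_sums = [sum(row) for row in B]
```

For increasing nodes in (0, 1), such a matrix is totally positive, which is the structural property the article exploits.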
Numerical methods based upon Bézier curves are popular in computer-aided geometric design (CAGD); see [11,12,13,14,15]. Bézier curves are parameterized by Bernstein polynomials, so computations with Bézier curves reduce to computations in the Bernstein polynomial basis.
The theoretical basis for designing fast and accurate algorithms to compute the greatest common divisor of two real polynomials of degree at most n, expressed in the Bernstein polynomial basis, is studied in [16]. Fast algorithms to determine the required power form of the polynomials are studied in [17,18], and their matrix counterparts in [19], for the evaluation of the GCD.
In [16], the Bezout form for two such polynomials is defined by an expression of the following form. Furthermore, Bezoutian matrices with respect to different polynomial bases have been studied by various authors; see [20,21,22,23,24].
Structured matrices, particularly Vandermonde and Cauchy matrices, appear in vast areas of computation; see [25,26]. Cauchy–Vandermonde matrices are useful tools in the numerical approximation of solutions of singular integral equations; for more detail, we refer to [27]. This type of structured matrix also occurs in connection with the numerical approximation of solutions of quadrature problems [28]. In fact, Cauchy–Vandermonde matrices are ill-conditioned. High accuracy has nevertheless been achieved for this class of structured matrices by carefully exploiting their specific structural properties [1,29,30,31,32,33,34,35,36].
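The ill-conditioning can be seen concretely in a small experiment (an illustrative sketch, not from the article): the 2-norm condition number of an ordinary Vandermonde matrix on equispaced nodes in [0, 1] grows rapidly with the dimension.

```python
import numpy as np

def vandermonde_cond(n):
    # Vandermonde matrix on n equispaced nodes in [0, 1].
    V = np.vander(np.linspace(0.0, 1.0, n), increasing=True)
    return np.linalg.cond(V)

# Condition numbers for growing dimension; the growth is roughly exponential.
conds = [vandermonde_cond(n) for n in (4, 8, 12)]
```

This is why plain floating-point algorithms lose accuracy on such matrices, and why structure-exploiting algorithms are needed to achieve high relative accuracy.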
Vandermonde matrices appear in the study of interpolation problems that exploit the monomial basis [25]. Polynomial-Vandermonde matrices arise when a polynomial basis is considered rather than the monomial basis, and such matrices appear in many applications, including approximation, interpolation, and Gaussian quadrature [37,38,39,40,41,42].
An extensive amount of research has been devoted to high-accuracy numerical computations for many classes of structured matrices. These include totally positive and totally negative matrices [6,43], totally non-positive matrices [44,45], matrices having a rank-revealing decomposition [46,47], rank-structured matrices [48,49], diagonally dominant M-matrices [50,51], and structured sign-regular matrices [7,52]. The numerical approximation of the eigenvalues of structured quasi-rational Bernstein–Vandermonde matrices to high relative accuracy is studied in greater detail in [53,54].
An extensive amount of work has also addressed necessary and sufficient criteria for asymptotic swarm stability. The system under consideration achieves asymptotic stability if and only if there exist Hermitian matrices satisfying a complex Lyapunov inequality for all of the system vertex matrices [55].
The symmetry and asymmetry properties of orthogonal polynomials play a key role in solving systems of differential equations that appear in the mathematical modeling of real-world problems. The classical orthogonal polynomials, for instance Hermite, Legendre, and Laguerre polynomials, together with discrete families such as Krawtchouk and Chebyshev polynomials, have widespread applications across many branches of science and engineering. In [56], Chebyshev polynomials are used to simulate a two-dimensional mass transfer equation subject to Robin and Neumann boundary conditions.
The Boolean complexity of multiplying structured matrices by a vector and of solving nonsingular linear systems with such matrices is studied in [57]. The main focus is on four basic and most popular classes, namely Toeplitz, Hankel, Cauchy, and Vandermonde matrices, for which these computational problems are equivalent to polynomial multiplication and division and to polynomial and rational multipoint evaluation and interpolation.
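The Toeplitz case of this equivalence is easy to check directly. The sketch below (an illustration, not code from [57]) builds the lower-triangular Toeplitz matrix generated by the coefficients of one polynomial and verifies that its action on the coefficient vector of a second polynomial equals the convolution of the two coefficient sequences, i.e. the coefficients of the product polynomial.

```python
import numpy as np

p = np.array([1.0, 2.0, 3.0])   # coefficients of p(x) = 1 + 2x + 3x^2
q = np.array([4.0, 5.0])        # coefficients of q(x) = 4 + 5x

# Lower-triangular Toeplitz matrix whose action on q gives the product p*q:
# column j carries the coefficients of p shifted down by j positions.
m = len(p) + len(q) - 1
T = np.zeros((m, len(q)))
for j in range(len(q)):
    T[j:j + len(p), j] = p

pq_toeplitz = T @ q
pq_convolve = np.convolve(p, q)  # coefficient sequence of p(x) * q(x)
```

Both vectors hold the coefficients of p(x)q(x) = 4 + 13x + 22x^2 + 15x^3, so Toeplitz matrix-vector multiplication and polynomial multiplication are the same computation.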
In this article, we present the spectral properties of a class of structured matrices. We study the behavior of eigenvalues, singular values, and structured singular values for totally positive Bernstein–Vandermonde matrices, Bernstein–Bezoutian matrices, Cauchy–polynomial-Vandermonde matrices, and quasi-rational Bernstein–Vandermonde matrices. Furthermore, we also present the numerical approximation of condition numbers for the structured matrices considered in the current study. Our proposed approach differs from the methodology of [58], where a low-rank ODE-based technique was developed for the stability and instability analysis of linear time-invariant systems appearing in control, based mainly on a two-level, that is, inner-outer, algorithm.
The key novel contribution of this paper is the study of the spectral properties, particularly the computation of the structured singular values, of totally positive Bernstein–Vandermonde matrices, Bernstein–Bezoutian matrices, Cauchy–polynomial-Vandermonde matrices, and structured quasi-rational Bernstein–Vandermonde matrices.
In Section 2 of this article, we present the definitions of structured totally positive Bernstein–Vandermonde matrices, Bernstein–Bezoutian matrices, Cauchy–polynomial-Vandermonde matrices, and quasi-rational Bernstein–Vandermonde matrices. We give a brief and concise introduction to the numerical approximation of structured singular values in Section 3. Section 4 contains the main results on the numerical computation of the largest and smallest singular values; the exact behavior of the largest and smallest singular values is also discussed. In Section 5, we present numerical experiments for Bernstein–Vandermonde and Bernstein–Bezoutian matrices; the numerical approximation of the eigenvalues, singular values, and structured singular values is analyzed and presented. The numerical experiments on the spectral quantities of Cauchy–polynomial-Vandermonde matrices and quasi-rational Bernstein–Vandermonde matrices are presented in Section 6 and Section 7, respectively. Section 8 contains numerical tests comparing the approximated lower bounds of the structured singular values for a class of higher-dimensional Bernstein–Vandermonde matrices. Finally, we present concluding remarks in the closing section.
4. Main Results
In this section, we present our main results concerning the numerical approximation of the largest singular value. Furthermore, we also discuss the increasing behavior of the largest singular value and the decreasing behavior of the smallest singular value. The following Theorem 1 allows its computation.
Definition 10. For a given matrix A, the scalars λ are called the eigenvalues of A if det(A − λI) = 0, where I denotes an identity matrix of the same dimension as A.
Definition 11. For a given matrix A, the non-negative numbers σ1 ≥ σ2 ≥ … ≥ 0 are known as the singular values of A if A can be decomposed as A = UΣV^T, where U and V are orthogonal matrices and Σ is a diagonal matrix having the σi on its main diagonal.
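The two definitions can be verified numerically; the following sketch (illustrative only, using NumPy rather than the MATLAB functions employed later in the article, and with an arbitrary small matrix) computes the eigenvalues of Definition 10 and the singular value decomposition of Definition 11, and checks that A = UΣV^T.

```python
import numpy as np

A = np.array([[2.0, 1.0],
              [1.0, 3.0]])

# Definition 10: the eigenvalues solve det(A - lambda*I) = 0.
eigvals = np.linalg.eigvals(A)

# Definition 11: A = U * diag(sigma) * V^T with orthogonal U, V and sigma >= 0.
U, sigma, Vt = np.linalg.svd(A)
A_rebuilt = U @ np.diag(sigma) @ Vt
```

Since the product of the eigenvalues equals det(A) = 5, the computed spectrum can be sanity-checked without solving the characteristic polynomial by hand.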
Lemma 1. Let A(t) be a smooth matrix family, and let λ(t) denote the eigenvalue of A(t) that converges to a simple eigenvalue of A. Then, the continuous branch of eigenvalues is analytic nearby, with the derivative given below, where the indicated vectors denote the left and right singular vectors of A(t).

Theorem 1. For a given matrix A, the largest singular value is obtained as

where

and

Proof. First, we show that the above-given expressions are valid. We consider the following factorization of A, for which we refer interested readers to [62].
Here, D denotes a non-negative diagonal matrix, while E is of full rank.
For the quantity in the numerator, we make use of the arithmetic-geometric mean inequality, which allows us to write the following inequality for the singular values of D.

Next, we apply the arithmetic-geometric mean inequality to the remaining quantity, which yields

Equations (12) and (13) allow us to write

Finally, from Equation (14), we have
For the quantity in the denominator, we make use of the arithmetic-geometric mean inequality for the singular values of D, as

In addition,

The inequalities in Equations (15) and (16) yield

Because the matrix 2-norm of D can be written as

Therefore, Equations (17) and (18) imply that

or
In a similar way, we can obtain the expressions for the numerator and the denominator. Now, we aim to prove the remaining claim. The matrix D then takes the corresponding diagonal form, and making use of the singular value decomposition yields the following expression. From [63], the first quantity can be written as

with

In a similar way, from [63], the second quantity takes the following form,

with

The singular value decomposition then yields

where

As the two expressions agree, this completes the proof. □
The increasing behavior of the largest singular value over the family of submatrices is given in Theorem 2. For simplicity, we omit the dependency on t in Theorem 2.
Theorem 2. Let the matrices below be submatrices of the full matrix, let the former quantities be the largest singular values of the submatrices, and let the latter quantity denote the largest singular value of the full matrix. Then, the largest singular values satisfy the inequality

Proof. For the stated indices, the submatrices and matrices can be written as

and
In Equations (19) and (20), the first quantity denotes all the k components of the submatrix, and the second denotes all components of the full matrix. Let the indicated vectors denote the left and right singular vectors of the submatrix; then

From Equation (21), we have

In Equation (22), the indicated vectors denote the right and left singular vectors corresponding to the family of matrices, respectively. From Equations (21) and (22), we have
□
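The monotone growth of the largest singular value in Theorem 2 can be illustrated numerically. The sketch below is only an illustration with an arbitrary random matrix, and it assumes the submatrices are the leading column blocks A[:, :k]; it checks that the largest singular value never decreases as columns are appended, and is therefore bounded by the largest singular value of the full matrix.

```python
import numpy as np

rng = np.random.default_rng(0)
A = rng.standard_normal((6, 6))

sigma_max_full = np.linalg.svd(A, compute_uv=False)[0]
# Largest singular value of each leading column block A[:, :k].
sigma_max_sub = [np.linalg.svd(A[:, :k], compute_uv=False)[0]
                 for k in range(1, 7)]
```

The sequence is non-decreasing in k because appending a column enlarges the set of unit vectors over which ||Ax|| is maximized, and it ends exactly at the largest singular value of the full matrix.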
The decreasing behavior of the smallest singular value is given in Theorem 3.
Theorem 3. Let the matrices below be submatrices of the full matrix, let the former quantities be the smallest singular values of the submatrices, and let the latter quantity be the smallest singular value of the full matrix. Then, the smallest singular values satisfy the inequality

Proof. For the stated indices, the matrices can be written as

and

Let the indicated vectors be the left and right singular vectors of the submatrix; then

Now,

Since the quantities above are the smallest singular values of the respective matrices, Equations (24) and (25) yield
□
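The decreasing behavior in Theorem 3 admits the same kind of illustrative check (again with an arbitrary random matrix, and assuming the submatrices are leading column blocks, which is an assumption of this sketch): the smallest singular value can only shrink as columns are appended, so the smallest singular value of the full matrix bounds that of every column submatrix from below.

```python
import numpy as np

rng = np.random.default_rng(1)
A = rng.standard_normal((6, 6))

sigma_min_full = np.linalg.svd(A, compute_uv=False)[-1]
# Smallest singular value of each leading column block A[:, :k].
sigma_min_sub = [np.linalg.svd(A[:, :k], compute_uv=False)[-1]
                 for k in range(1, 7)]
```

The sequence is non-increasing in k: restricting a candidate unit vector of the larger block to have a zero last entry reproduces the smaller block, so the minimum over the larger block cannot exceed that over the smaller one.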
Next, we aim to fix the largest singular value so that the desired condition holds. For this purpose, we make use of an inner-outer algorithm. The main objective is to develop and then solve an optimization problem; in turn, this optimization problem yields a system of ordinary differential equations (ODEs). In the outer algorithm, the main aim is to modify the perturbation level via a fast Newton iteration. For more details, we refer to [58].
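The outer Newton update can be sketched generically (this is a simplified illustration, not the algorithm of [58]): given a smooth scalar map from the perturbation level eps to a residual f(eps), for instance a singular value of the perturbed matrix minus a target value, each Newton step corrects eps using a derivative estimate. The function name and the finite-difference slope below are assumptions of this sketch.

```python
def newton_perturbation_level(f, eps0, tol=1e-10, max_iter=50, h=1e-7):
    """Scalar Newton iteration for f(eps) = 0, with a finite-difference slope."""
    eps = eps0
    for _ in range(max_iter):
        val = f(eps)
        if abs(val) < tol:
            break
        slope = (f(eps + h) - val) / h  # forward-difference derivative estimate
        eps -= val / slope              # Newton correction of the level
    return eps

# Toy residual: find eps with 2*eps - 1 = 0, i.e. eps = 0.5.
root = newton_perturbation_level(lambda e: 2.0 * e - 1.0, eps0=0.0)
```

In [58], the derivative information comes from the inner ODE solution rather than a finite difference, which is what makes the outer iteration fast.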
7. Quasi-Rational Bernstein–Vandermonde Matrices
The following result from [60] gives the computation of the determinant of a quasi-rational Bernstein–Vandermonde matrix.

Theorem 7 ([60]). Let the given matrix be a quasi-rational Bernstein–Vandermonde matrix. Then, the determinant is computed as

In addition,

and
The parametric matrix for a quasi-rational Bernstein–Vandermonde matrix is given by the following theorem.
Theorem 8. Let the given matrix be a non-singular quasi-rational Bernstein–Vandermonde matrix. Then, the parametric matrix is the following matrix,

and
Spectral Properties of Quasi-Rational Bernstein–Vandermonde Matrices
In this subsection, we present important spectral properties of quasi-rational Bernstein–Vandermonde matrices. These matrices are taken from [60] for the numerical approximation of structured singular values.
We make use of the well-known MATLAB functions eig and svd to numerically approximate the eigenvalues and singular values, respectively. Our main objective is to numerically approximate lower bounds of the structured singular value or μ-value, which is a straightforward generalization of the singular values of constant structured matrices. Furthermore, we make use of the MATLAB function mussv to numerically compute both lower and upper bounds of the structured singular values or μ-values for constant structured matrices.
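Even without mussv, the standard bracketing of the μ-value can be illustrated: for the usual complex uncertainty structures, the spectral radius gives a cheap lower bound and the largest singular value a cheap upper bound, ρ(A) ≤ μ(A) ≤ σ̄(A). The sketch below (illustrative only, with an arbitrary matrix, and not a replacement for the mussv bounds discussed in the article) computes these two bounds.

```python
import numpy as np

A = np.array([[0.0, 2.0],
              [0.5, 1.0]])

# Cheap bracketing of the structured singular value mu(A):
spectral_radius = max(abs(np.linalg.eigvals(A)))   # lower bound on mu
sigma_max = np.linalg.svd(A, compute_uv=False)[0]  # upper bound on mu
```

The gap between the two bounds is exactly what dedicated algorithms such as mussv or the method of [58] aim to close.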
Example 4. Consider a quasi-rational Bernstein–Vandermonde matrix

The computation of the eigenvalues and singular values gives

and

respectively. The first and second columns of Table 5 represent the numerically approximated upper and lower bounds of the structured singular values or μ-values obtained via the MATLAB function mussv. The numerically approximated lower bounds of the structured singular values or μ-values obtained with the algorithm of [58] are represented in the last column of Table 5.
In Figure 3, the left-hand subfigure shows the plots of the eigenvalues, the singular values, and the numerically approximated lower and upper bounds of the structured singular values or μ-values against the time t. In Figure 3, the blue dotted line at the bottom of the left subfigure denotes the spectrum, that is, the eigenvalues. Because singular values are non-negative, the red dotted line indicates that the numerically approximated eigenvalues are bounded from above by the singular values. The golden dotted line shows that all of these quantities, that is, the eigenvalues, the singular values, the lower bounds of the structured singular values or μ-values approximated by the MATLAB mussv function (the purple dotted line), and the numerically approximated lower bounds of the structured singular values or μ-values obtained via the algorithm of [58] (the turquoise dotted line), are strictly bounded by the numerically computed upper bounds of the structured singular values from the MATLAB function mussv.
In Figure 3, the right-hand subfigure shows the plots of the condition numbers versus time. Furthermore, the behaviour of the three spectral condition numbers under consideration is shown in the right-hand subfigure of Figure 3.
8. Numerical Testing for Matrices in Higher Dimensions
In this section, we aim to present the comparison of the numerically approximated bounds of structured singular values for Bernstein-Vandermonde structured matrices in higher dimensions. For numerical testing, we choose the Bernstein-Vandermonde matrices having sizes , respectively.
The first column of Table 6 gives the size of the square Bernstein–Vandermonde matrices under consideration in this article. The second and third columns give the numerically approximated upper and lower bounds of the structured singular values or μ-values obtained with the MATLAB function mussv, respectively. The fourth and last column of Table 6 gives the numerical approximation of the lower bounds of the structured singular value or μ-value computed via the algorithm of [58]. For size 10, the lower bound approximated numerically via the MATLAB mussv function is better. However, for the larger sizes, the lower bounds approximated by [58] are significantly better than those approximated by the MATLAB function mussv.
Algorithm 1 Approximate perturbation level.

procedure. Given A, the tolerance, a lower bound, an upper bound, a given upper bound, an initial set of eigenvalues, and the maximum iteration count imax:
1. Determine the solution of the system of ODEs (4.10) in [58], beginning with the initial choice of the perturbation level. The resulting quantity denotes the stationary solution; record the smallest eigenvalue of the modified perturbed structured matrix, together with the computed eigenvectors.
2. Determine a new perturbation level via a single step of the fast Newton iteration.
3. While the stopping criterion is not satisfied:
   (a) determine the solution of the ODEs (4.10) in [58], starting from the current value;
   (b) take the stationary solution of (4.10) in [58] and the smallest eigenvalue of the modified structured matrix;
   (c) if the acceptance test holds, set the perturbation level and determine a suitable new value with one step of the fast Newton iteration.
end procedure