Article

Asymptotic Stability of Non-Autonomous Systems and a Generalization of Levinson’s Theorem

Min-Gi Lee
Department of Mathematics, Kyungpook National University, 80 Daehak-ro, Buk-gu, Daegu 41566, Korea
Mathematics 2019, 7(12), 1213; https://doi.org/10.3390/math7121213
Submission received: 13 October 2019 / Revised: 29 November 2019 / Accepted: 4 December 2019 / Published: 10 December 2019

Abstract

We study the asymptotic stability of non-autonomous linear systems with time-dependent coefficient matrices $\{A(t)\}_{t \in \mathbb{R}}$. The classical theorem of Levinson has been an indispensable tool for the study of the asymptotic stability of non-autonomous linear systems. Contrary to the constant coefficient case, having all eigenvalues in the left half complex plane does not imply asymptotic stability of the zero solution. Levinson's theorem assumes that the coefficient matrix is a suitable perturbation of a diagonal matrix. Our objective is to prove a theorem similar to Levinson's Theorem when the family of matrices merely admits an upper triangular factorization. In fact, in the presence of defective eigenvalues, Levinson's Theorem does not apply. In our paper, we first investigate the asymptotic behavior of upper triangular systems and use fixed point theory to draw a few conclusions. We aim to understand asymptotic behavior dimension by dimension, and working with upper triangular matrices with internal blocks adds flexibility to the analysis.
MSC:
34D09; 34D10; 34E10

1. Introduction

We study the asymptotic stability of non-autonomous linear systems of ordinary differential equations
$$x'(t) = A(t)\, x(t), \tag{1}$$
where $x(t) \in \mathbb{R}^N$ and $A(t) \in \mathbb{R}^{N \times N}$ for each $t \in \mathbb{R}$. Non-autonomous linear systems typically arise when linearizing a non-linear dynamical system along a particular solution of interest that is not necessarily a fixed point. In this case, the asymptotic stability of the linearized non-autonomous system is the linear stability of the solution of interest. In the case of a fixed point, one ends up with a constant coefficient linear system.
Our objective is to generalize the theorem of Levinson [1]. The theorem can be found in many other places (e.g., [2]) in slightly different forms. For the reader's convenience, we include the theorem in Appendix A. This classical theorem has been an indispensable tool in many science and engineering problems, giving the correct asymptotic stability of a given system. It has also been generalized in many different ways by the authors of [3,4,5,6,7]. The exposition in [8] conveys well the historical context as well as the recent development of the theory. We first describe the rudiments of the problem, introduce Levinson's Theorem, and state which aspects of the theorem we still want to generalize.
Surprisingly, the asymptotic stability of a non-autonomous system is quite different from that of a constant coefficient system. For a constant coefficient system, the Jordan factorization of the coefficient matrix, which any matrix admits, tells the complete story of the asymptotic behavior. Each of the eigenvalues and the corresponding invariant subspaces are revealed by the factorization. Any solution, in its modulus, exhibits one of three asymptotic behaviors: orbits can grow asymptotically at an exponential rate; orbits can decay to 0 at an exponential rate; or orbits can grow or decay at a polynomial rate. The sign of the real part of the eigenvalue decides the behavior. One might expect that for a non-autonomous system the eigenvalues $\lambda_i(t)$, $i = 1, \dots, N$, of $A(t)$ could tell the story as well, or at least part of the story, but it is well known that this is not the case.
Markus and Yamabe [9] presented an example of a non-autonomous system, where the coefficient matrix has eigenvalues with consistently negative real parts while the solution grows exponentially:
$$\begin{pmatrix} x \\ y \end{pmatrix}' = \left[ \begin{pmatrix} -\frac{1}{4} & 1 \\ -1 & -\frac{1}{4} \end{pmatrix} + \frac{3}{4}\begin{pmatrix} \cos 2t & -\sin 2t \\ -\sin 2t & -\cos 2t \end{pmatrix} \right] \begin{pmatrix} x \\ y \end{pmatrix}.$$
Its eigenvalues are $-\frac{1}{4} \pm \frac{\sqrt{7}}{4}\, i$ regardless of $t$, but $e^{t/2}\,(-\cos t, \sin t)^{\top}$ solves the equation.
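The example is easy to reproduce numerically. Below is a minimal sketch (ours, not from the original references) that checks both claims with standard scientific Python tools: the eigenvalues of the coefficient matrix are constant with negative real part, while a solution grows like $e^{t/2}$.

```python
# Numerical check of the Markus-Yamabe example (illustrative sketch).
import numpy as np
from scipy.integrate import solve_ivp

def A(t):
    c, s = np.cos(2 * t), np.sin(2 * t)
    return np.array([[-0.25, 1.0], [-1.0, -0.25]]) \
         + 0.75 * np.array([[c, -s], [-s, -c]])

# The eigenvalues are -1/4 +/- (sqrt(7)/4) i at every t.
for t in (0.0, 1.3, 7.9):
    print(np.linalg.eigvals(A(t)))          # -0.25 +/- 0.6614j each time

# Integrate from x(0) = (-1, 0), which lies on the growing solution
# e^{t/2} (-cos t, sin t); the orbit blows up despite the spectrum.
sol = solve_ivp(lambda t, x: A(t) @ x, (0.0, 10.0), [-1.0, 0.0],
                rtol=1e-10, atol=1e-12)
t_end = 10.0
exact = np.exp(t_end / 2) * np.array([-np.cos(t_end), np.sin(t_end)])
print(sol.y[:, -1], exact)                   # both of magnitude ~ e^5
```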
In light of the above example, the question arises: for what sort of family $\{A(t)\}_{t\in\mathbb{R}}$ can we draw a conclusion? There are cases where the eigenvalues reveal asymptotic stability. Consider the simplest case of a family of diagonal coefficient matrices, $A(t) = \mathrm{diag}(\lambda_1(t), \dots, \lambda_N(t))$. In this case, the system is decoupled and the behavior can be read off from the eigenvalues:
$$x_j(t) = x_j(t_0)\, \exp\left( \int_{t_0}^{t} \lambda_j(\eta)\, d\eta \right), \qquad j = 1, \dots, N.$$
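As a quick sanity check, the closed-form solution above can be compared against direct numerical integration; the sketch below (with illustrative choices of $\lambda_j$, not taken from the paper) does so.

```python
# Decoupled diagonal system: each component solves its own scalar ODE,
# x_j(t) = x_j(t0) exp(integral of lambda_j). Illustrative lambdas only.
import numpy as np
from scipy.integrate import solve_ivp, quad

lams = [lambda t: -1.0, lambda t: np.sin(t), lambda t: 1.0 / (1.0 + t)]
t0, t1 = 0.0, 5.0
x0 = np.array([1.0, 2.0, -3.0])

rhs = lambda t, x: np.array([lam(t) for lam in lams]) * x
numeric = solve_ivp(rhs, (t0, t1), x0, rtol=1e-10, atol=1e-12).y[:, -1]
exact = np.array([x0[j] * np.exp(quad(lams[j], t0, t1)[0]) for j in range(3)])
print(numeric, exact)    # agree to integration tolerance
```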
Now, we introduce the classical theorem of Levinson. With the assumption that $A(t)$ admits a differentiable factorization $A(t) = S(t)\, U(t)\, S(t)^{-1}$, the change of variable
$$y(t) = S(t)^{-1} x(t)$$
leads to the system
$$y'(t) = U(t)\, y(t) - S(t)^{-1} S'(t)\, y(t).$$
The point of view is that $-S(t)^{-1}S'(t) =: E(t)$ is a perturbation of $U(t)$. The asymptotic behavior under this perturbation is still in question, for we have the counterexample of Markus–Yamabe.
First, consider the case $U(t) = \mathrm{diag}(\lambda_1(t), \dots, \lambda_N(t))$. From the statement of Levinson's Theorem (Theorem A1), we note that the theorem assumes two main conditions: one is the smallness of $E(t)$, expressed by requiring that the integral $\int_{t_0}^{\infty} \|E(t)\|\, dt$ is finite; the other is a set of spectral gap conditions. Under these assumptions, Levinson's Theorem states that the asymptotic behavior of a diagonal system persists under small perturbations.
The objective of this paper is to generalize Levinson's Theorem, allowing $U(t)$ to be upper triangular as considered above. In general, neither the diagonalization nor the Jordan factorization is continuous in $t$, even if $A(t)$ is smooth. We believe that this generalization is theoretically interesting as well as practically useful.
The generalization is theoretically interesting because applications of Levinson's Theorem become critical when two eigenvalues become identical in the limit $t \to \infty$. (Of course, the limit must not be defective for the theorem to be applicable.) To illustrate the point, take the example of the $2 \times 2$ diagonal matrix $\Lambda(t) = \mathrm{diag}(\lambda_1(t), \lambda_2(t))$ with $\lambda_1(t) = 0$ and $\lambda_2(t) = 1/t$. We see that the spectral gap vanishes in the limit $t \to \infty$. The gap conditions of Levinson's Theorem demand so little that this example fulfills them. As the theorem holds, even with perturbations, the two respective asymptotic rates can be discerned. Integrating the equations, we see that the two rates are, respectively, $1$ and $t$, and Levinson's Theorem states that these finely separated rates are persistent. Readers who are familiar with the invariant manifold theory of dynamical systems may find that this phenomenon is not typical. We find this aspect powerful and suggestive at the same time, because this feature manifests exactly when the eigenvalues become identical in the limit, or when there is a chance of having a defective eigenvalue, in which case the theorem is possibly not applicable.
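A short computation makes the example concrete. The sketch below (our illustration) integrates the system with $\Lambda(t) = \mathrm{diag}(0, 1/t)$ from $t = 1$ and exhibits the two rates $1$ and $t$, together with the slowly divergent gap integral $\int_{t_1}^{t_2} \mathrm{Re}(\lambda_2 - \lambda_1)\, dt = \log(t_2/t_1)$.

```python
# The vanishing-gap example: Lambda(t) = diag(0, 1/t) on [1, infinity).
# The two solutions are x_1(t) = 1 and x_2(t) = t; Levinson's gap integral
# log(t2/t1) diverges, but only logarithmically. Illustrative sketch.
import numpy as np
from scipy.integrate import solve_ivp

rhs = lambda t, x: np.array([0.0, 1.0 / t]) * x
sol = solve_ivp(rhs, (1.0, 100.0), [1.0, 1.0], rtol=1e-10, atol=1e-12)
print(sol.y[:, -1])        # approximately (1, 100): the rates 1 and t
print(np.log(sol.t[-1]))   # the gap integral from t1 = 1 to t2 = 100
```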
The generalization is practically useful because many engineering problems involve many dimensions. When there are several dimensions, having even one defective eigenvalue violates the prerequisite of Levinson's Theorem, as diagonalization fails. One can then draw no conclusion at all, which we think is not sharp. Our results can be useful if one is only interested in the partial asymptotic behaviors corresponding to a certain block.
Generalizations where the matrix under perturbation is block diagonal can also be found in the literature. For instance, a persistence theorem for a matrix with a single Jordan block can be found in [5]. Devinatz and Kaplan [10] presented a generalization to the case where eigenvalues become defective in the limit. Remarkably, they also proved a lemma showing that, under suitable conditions, there is an upper triangular factorization that converges to a Jordan factorization in the limit. Theorem 6.6 in [8] and Theorem 4 in [11] (revised in the form of Theorem 6.8 in [8]) are other persistence theorems for matrices with multiple Jordan blocks. The results in [8,10] are restricted to matrices with Jordan blocks, which makes them very explicit, but their arguments seem not to depend heavily on having the exact Jordan form. Our result can be easier to apply, since we do not need to check whether the matrix is in a specific form such as the Jordan form, yet it is still enough to characterize the asymptotic rate of the corresponding block. Our result is also useful when the continuity of the Jordan factorization is not assured.
We lastly comment on the subject of simultaneous factorization. If $\{A(t)\}_{t\in\mathbb{R}}$ is a smooth one-parameter family of matrices, the possibility of simultaneous factorization is a substantial subject in matrix theory. As stated above, neither the diagonalization nor the Jordan factorization is continuous in general. The continuity of the Schur factorization under suitable conditions is shown in [12]. It is our working hypothesis that $\{A(t)\}_{t\in\mathbb{R}}$ admits a factorization $A(t) = S(t)\, U(t)\, S(t)^{-1}$ smoothly in $t$ with $U(t)$ upper triangular.
The paper is organized in the following way. In Section 2, we present the preliminaries necessary for stating the problem. In Section 3, we present the main results: first, the stability of the system with an upper triangular coefficient matrix, which is the unperturbed system; then, a persistence theorem.

2. Preliminaries

The purpose of this section is to establish notation and introduce a few notions from the theory of ordinary differential equations that are necessary for stating our problem. Readers who are familiar with these basic notions may want to move on to the next section.
For a vector $x \in \mathbb{R}^N$, $|x| := \max_{i=1,\dots,N} |x_i|$. If $A$ is an $N \times N$ matrix, $\|A\|$ denotes the operator norm with respect to this vector norm, $\|A\| := \max_{|x| \le 1} |Ax|$. If $x(t)$ is an orbit, our primary norm is the sup norm $\|x\|_{L^\infty}$. To compensate appropriately for growth, it is convenient to use a weighted norm. For a given real-valued function $\theta$, we use the notation
$$\|x\|_{L^\infty_\theta([a,\infty))} := \sup_{t \ge a} |x(t)| \exp\left( -\int_a^t \theta(\eta)\, d\eta \right),$$
or $\|x\|_\theta$ for short if there is no confusion about $a$. In that case, we also use the notation $|x(t)|_\theta := |x(t)| \exp\left( -\int_a^t \theta(\eta)\, d\eta \right)$ for the weighted length at time $t$.

2.1. Fundamental Matrices

By the Picard–Lindelöf theorem, the initial value problem $x'(t) = f(t, x(t))$, $x(t_0) = x_0$ has a unique solution for $|t - t_0| \le h$ with $h = \min(a, b/M)$, where $a$, $b$, and $M$ are such that, in the domain $|t - t_0| \le a$ and $|x - x_0| \le b$, $f(t,x)$ is continuous in $t$, uniformly Lipschitz in $x$, and $|f(t,x)|$ is bounded by $M$.
For our Equation (1), we require that
  • The entries $A_{ij}(t)$, $i, j = 1, \dots, N$, are continuous at all $t \in \mathbb{R}$.
  • There is $K > 0$ such that $|A_{ij}(t)| < K$ for all $i, j = 1, \dots, N$ and all $t \in \mathbb{R}$.
We call this Assumption $(A0)$. One might consider a smooth cut-off if necessary. By the Picard–Lindelöf theorem, the solution extends uniquely to the whole of $\mathbb{R}$.
Hence, for any two numbers $t$ and $\tau$, it makes sense to consider the solution matrix $\Phi(t, \tau)$ that maps $x(\tau)$ to $x(t)$, and the whole family $\{\Phi(t,\tau)\}_{t,\tau\in\mathbb{R}}$ of them. In particular, the $i$th column of $\Phi(t,\tau)$ is the solution whose initial condition at time $\tau$ is the $i$th coordinate basis vector. Matrices whose columns are independent solutions are called fundamental matrices, and thus $\Phi(t,\tau)$ is a particular type of fundamental matrix. For an autonomous system, the solution matrices may be written in the form $\Phi(t - \tau)$, but, for a non-autonomous system, they depend on $t$ and $\tau$ separately.
We have $\Phi(t,t) = 1$ (the identity) for all $t$, and we have the following relation among solution matrices:
$$\Phi(a,b)\, \Phi(b,c) = \Phi(a,c), \qquad \forall\, a, b, c.$$
In particular, $\Phi(a,b)$ is always invertible and its inverse is $\Phi(b,a)$.
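These properties are easy to verify numerically. The following sketch (with a hypothetical coefficient family $A(t)$ chosen only for illustration) computes $\Phi(t,\tau)$ by integrating the matrix differential equation $\frac{d}{dt}\Phi(t,\tau) = A(t)\Phi(t,\tau)$, $\Phi(\tau,\tau) = 1$, and checks the composition and inverse relations.

```python
# Solution matrices Phi(t, tau) of x' = A(t) x, and the relations
# Phi(a,b) Phi(b,c) = Phi(a,c) and Phi(a,b)^{-1} = Phi(b,a).
import numpy as np
from scipy.integrate import solve_ivp

N = 3
def A(t):  # an arbitrary bounded continuous family, for illustration only
    return np.array([[0.1, np.sin(t), 0.0],
                     [0.0, -0.2, np.cos(t)],
                     [0.3, 0.0, -0.1]])

def Phi(t, tau):
    # Integrate the flattened matrix ODE; solve_ivp integrates backward
    # automatically when t < tau.
    rhs = lambda s, m: (A(s) @ m.reshape(N, N)).ravel()
    out = solve_ivp(rhs, (tau, t), np.eye(N).ravel(),
                    rtol=1e-11, atol=1e-12).y[:, -1]
    return out.reshape(N, N)

a, b, c = 2.0, 1.0, 0.0
print(np.allclose(Phi(a, b) @ Phi(b, c), Phi(a, c), atol=1e-7))   # True
print(np.allclose(Phi(b, a) @ Phi(a, b), np.eye(N), atol=1e-7))   # True
```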

2.2. Operations on Block Diagonal Matrices

Let $N_1, N_2, \dots, N_k$ be fixed positive integers such that $\sum_{\alpha=1}^{k} N_\alpha = N$. Consider the collection $\mathcal{C}$ of all block diagonal $N \times N$ matrices of the form $U = \mathrm{diag}(B_1, B_2, \dots, B_k)$, with blocks, respectively, of dimensions $N_\alpha \times N_\alpha$. $\mathcal{C}$ is closed under matrix multiplication: for $U = \mathrm{diag}(B_1, B_2, \dots, B_k)$ and $W = \mathrm{diag}(C_1, C_2, \dots, C_k)$, $UW = \mathrm{diag}(B_1 C_1, B_2 C_2, \dots, B_k C_k)$.
Let $P_\alpha = \mathrm{diag}(0, \dots, 0, 1_{N_\alpha}, 0, \dots, 0)$, whose only nontrivial block, at the $\alpha$th site, is the $N_\alpha$-dimensional identity matrix. This gives the following projections: if $x \in \mathbb{R}^N$, then $x^\alpha$ refers to the $N$-dimensional vector $P_\alpha x$; if $U \in \mathcal{C}$, then $U^\alpha$ refers to the $N \times N$ matrix $P_\alpha U$. One readily verifies that
$$P_\alpha(UW) = (P_\alpha U)(P_\alpha W), \qquad P_\alpha(Ux) = (P_\alpha U)(P_\alpha x),$$
and it follows that $P_\alpha(U_1 U_2 \cdots U_j x) = U_1^\alpha U_2^\alpha \cdots U_j^\alpha x^\alpha$.
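The identities are straightforward to confirm numerically; a small sketch (with hypothetical block sizes of our choosing) follows.

```python
# Block projections: P_alpha(U W) = (P_alpha U)(P_alpha W) for block
# diagonal U, W, and likewise for matrix-vector products.
import numpy as np
from scipy.linalg import block_diag

sizes = [2, 3]                          # N_1 = 2, N_2 = 3, so N = 5
rng = np.random.default_rng(0)
U = block_diag(*[rng.standard_normal((n, n)) for n in sizes])
W = block_diag(*[rng.standard_normal((n, n)) for n in sizes])

def P(alpha):                           # projection onto the alpha-th block
    d = np.zeros(sum(sizes))
    start = sum(sizes[:alpha])
    d[start:start + sizes[alpha]] = 1.0
    return np.diag(d)

x = rng.standard_normal(sum(sizes))
for alpha in range(len(sizes)):
    Pa = P(alpha)
    print(np.allclose(Pa @ (U @ W), (Pa @ U) @ (Pa @ W)),   # True
          np.allclose(Pa @ (U @ x), (Pa @ U) @ (Pa @ x)))   # True
```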

3. Results

We first consider the asymptotic stability when the coefficient matrix is upper triangular. This result is used as the building block for the asymptotic stability of a larger system whose blocks are upper triangular.
For $\{U(t)\}_{t\in\mathbb{R}}$, a family of $N \times N$ upper triangular matrices, we denote by $\lambda_i(t)$, $i = 1, \dots, N$, the diagonal entries of $U(t)$, which are the eigenvalues of $U(t)$. We also define
$$\bar\lambda(t) := \max_{i=1,\dots,N} \mathrm{Re}\, \lambda_i(t), \qquad \underline\lambda(t) := \min_{i=1,\dots,N} \mathrm{Re}\, \lambda_i(t).$$
The maximum and minimum growth forward in time (the backward-in-time versions then follow too) are expressed in terms of $\bar\lambda(t)$ and $\underline\lambda(t)$.
Proposition 1
(stability forward in time). Assume $(A0)$ for $U(t)$ and let $\{\Phi(t,\tau)\}_{t,\tau\in\mathbb{R}}$ be the solution matrices of the system in Equation (1) with $A(t) = U(t)$. Then, there is a constant $C_{N,K} > 0$ such that, for any $a \le b$ and any vector $V \in \mathbb{R}^N$, we have
$$C_{N,K}^{-1} (1 + b - a)^{-(N-1)}\, e^{\int_a^b \underline\lambda(\eta)\, d\eta}\, |V| \;\le\; |\Phi(b,a)\, V| \;\le\; C_{N,K}\, (1 + b - a)^{N-1}\, e^{\int_a^b \bar\lambda(\eta)\, d\eta}\, |V|. \tag{2}$$
The constant $C_{N,K}$ depends only on $N$ and $K$.
Proof. 
We first show the upper bound. Let $y(a) = V$ and $y(b) = \Phi(b,a)\, V$. We show by induction that, componentwise,
$$|y_j(b)| \le C_j \left( 1 + (b-a) + \cdots + (b-a)^{N-j} \right) \exp\left( \int_a^b \bar\lambda(\eta)\, d\eta \right) \max_{k \ge j} |y_k(a)|, \qquad j = 1, \dots, N,$$
for some constant $C_j > 0$ that depends only on the dimension $N$ and the bound $K$. For $j = N$, the last equation is $y_N'(t) = \lambda_N(t)\, y_N(t)$, and so $y_N(b) = \exp\left( \int_a^b \lambda_N(\eta)\, d\eta \right) y_N(a)$. Thus, the statement is true for $j = N$ with $C_N = 1$.
Now, suppose the statement is true for $j+1, j+2, \dots, N$. Since
$$y_j'(t) = \lambda_j(t)\, y_j(t) + \sum_{k > j} U_{j,k}(t)\, y_k(t),$$
its explicit solution is
$$\begin{aligned} y_j(b) &= \exp\left( \int_a^b \lambda_j(\eta)\, d\eta \right) y_j(a) + \sum_{k>j} \int_a^b \exp\left( \int_\tau^b \lambda_j(\eta)\, d\eta \right) U_{j,k}(\tau)\, y_k(\tau)\, d\tau \\ &= \exp\left( \int_a^b \bar\lambda(\eta)\, d\eta \right) \left\{ \exp\left( \int_a^b \left( \lambda_j(\eta) - \bar\lambda(\eta) \right) d\eta \right) y_j(a) + \sum_{k>j} \int_a^b \exp\left( \int_\tau^b \left( \lambda_j(\eta) - \bar\lambda(\eta) \right) d\eta \right) \exp\left( -\int_a^\tau \bar\lambda(\eta)\, d\eta \right) U_{j,k}(\tau)\, y_k(\tau)\, d\tau \right\}. \end{aligned}$$
By the induction hypothesis,
$$|y_j(b)| \le \exp\left( \int_a^b \bar\lambda(\eta)\, d\eta \right) \left\{ |y_j(a)| + \sum_{k > j} C_k K \max_{k > j} |y_k(a)| \int_a^b \left( 1 + (\tau - a) + \cdots + (\tau - a)^{N-k} \right) d\tau \right\} \le C_j \left( 1 + (b-a) + \cdots + (b-a)^{N-j} \right) \exp\left( \int_a^b \bar\lambda(\eta)\, d\eta \right) \max_{k \ge j} |y_k(a)|,$$
where $C_j$ is only dependent on $N$ and the bound $K$. By expanding $(1 + (b-a))^{N-1} = \sum_{j=0}^{N-1} \binom{N-1}{j} (b-a)^j$, we see that
$$1 + (b-a) + \cdots + (b-a)^{N-1} \le (1 + b - a)^{N-1}.$$
Now, we show the lower bound. To this end, we consider the inverse $\Phi(a,b)$. We assert that, for any real numbers $a$ and $b$ with $a \le b$ and for any $W \in \mathbb{R}^N$,
$$|\Phi(a,b)\, W| \le C\, (1 + b - a)^{N-1} \exp\left( -\int_a^b \underline\lambda(\eta)\, d\eta \right) |W|,$$
with the same $C$. This holds because of the previous assertion. Let $y(b) = W$ and $y(a) = \Phi(a,b)\, W$. We are to solve the equation
$$\frac{d}{dt}\, y(t) = U(t)\, y(t), \qquad y(b) = W,$$
backwards in time from $b$ to $a$. In the new independent variable $\tau = a + b - t$, since $\frac{d}{dt}\, y(t) = U(t)\, y(t)$,
$$\frac{d}{d\tau}\, y(a + b - \tau) = -U(a + b - \tau)\, y(a + b - \tau).$$
If we let $\hat y(\tau) = y(a + b - \tau)$ and $\hat U(\tau) = -U(a + b - \tau)$, then $\hat y$ solves
$$\frac{d}{d\tau}\, \hat y(\tau) = \hat U(\tau)\, \hat y(\tau), \qquad \hat y(a) = W.$$
Then, by the previous assertion,
$$|\hat y(b)| \le C\, (1 + b - a)^{N-1}\, |\hat y(a)| \exp\left( \int_a^b \hat{\bar\lambda}(\eta)\, d\eta \right),$$
with the same $N$ and $K$. However, $\hat y(b) = y(a)$ and $\hat y(a) = y(b)$. Since the eigenvalues of $\hat U(\tau)$ are $-\lambda_i(a + b - \tau)$, we have $\hat{\bar\lambda}(\eta) = -\underline\lambda(a + b - \eta)$, and the change of variable formula gives
$$\int_a^b \hat{\bar\lambda}(\eta)\, d\eta = -\int_a^b \underline\lambda(a + b - \eta)\, d\eta = -\int_a^b \underline\lambda(\eta)\, d\eta.$$
We have shown that
$$|y(a)| \le C\, (1 + b - a)^{N-1}\, |y(b)| \exp\left( -\int_a^b \underline\lambda(\eta)\, d\eta \right).$$
The estimate in Equation (2) follows from the two assertions. □
The proposition states that knowledge of the eigenvalues is not as revealing as in the case of a diagonal coefficient matrix, but it is sufficient to provide the upper and lower bounds.
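To illustrate the content of Proposition 1, the following sketch (an illustrative upper triangular family of our own choosing, with the constant $C_{N,K}$ simply taken to be 1) computes $|\Phi(b,a)V|$ numerically and compares it with the two sides of Equation (2); for this example the sandwich holds comfortably.

```python
# Sandwich bound of Proposition 1 for an upper triangular system, with
# the constant C_{N,K} set to 1 for illustration.
import numpy as np
from scipy.integrate import solve_ivp, quad

N = 3
def U(t):  # upper triangular; the diagonal entries are the eigenvalues
    return np.array([[-0.5 + 0.3 * np.sin(t), 1.0, 2.0],
                     [0.0, -1.0, np.cos(t)],
                     [0.0, 0.0, -0.8]])

lam_up = lambda t: float(np.max(np.diag(U(t))))   # lambda-bar
lam_lo = lambda t: float(np.min(np.diag(U(t))))   # lambda-underbar

a, b = 0.0, 8.0
rhs = lambda t, m: (U(t) @ m.reshape(N, N)).ravel()
Phi_ba = solve_ivp(rhs, (a, b), np.eye(N).ravel(),
                   rtol=1e-10, atol=1e-12).y[:, -1].reshape(N, N)

V = np.array([1.0, -1.0, 1.0])
mid = np.max(np.abs(Phi_ba @ V))                  # sup-norm of Phi(b,a)V
poly = (1.0 + b - a) ** (N - 1)
upper = poly * np.exp(quad(lam_up, a, b)[0]) * np.max(np.abs(V))
lower = np.exp(quad(lam_lo, a, b)[0]) / poly * np.max(np.abs(V))
print(lower <= mid <= upper, (lower, mid, upper))
```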
Now, we state the persistence theorem. We need to clarify what we aim to do in parallel with what is stated in Levinson's Theorem. Suppose we have a block diagonal matrix with a few blocks, where the asymptotic stability of each block is known. For instance, if a block is upper triangular, its behavior is decided by the maximum and minimum eigenvalues of the block. In general, the ranges of growth rates of the blocks overlap. If they do not, characterizing a set of orbits by rates in such a non-overlapping range identifies the block. Our result is the first half of what would be the parallel statement of Levinson's Theorem. The assumption here is that the blocks are divided into two groups, one group having rates slower than those in the other group. In essence, we consider a problem with two blocks with separated growth rates. We demonstrate persistence under suitable perturbations.
As shown below, for $\{U(t)\}_{t\in\mathbb{R}}$, a given family of upper triangular matrices, $y$ solves the system that we call the unperturbed system
$$y'(t) = U(t)\, y(t), \tag{3}$$
and $x$ solves the system
$$x'(t) = U(t)\, x(t) + E(t)\, x(t), \tag{4}$$
which is the perturbed system. All of the families of matrices described above satisfy $(A0)$. The family of solution matrices for the unperturbed problem is denoted by $\{\Phi(t,\tau)\}_{t,\tau\in\mathbb{R}}$.
Theorem 1.
Suppose that $U(t) = \mathrm{diag}(U_0(t), U_1(t))$ with $U_0(t)$ and $U_1(t)$ upper triangular and of dimensions $N_0 \times N_0$ and $N_1 \times N_1$, respectively. We assume the following.
1.
$\int_{a_0}^{\infty} \|E(t)\|\, dt < \infty$ for some $a_0$.
2.
There is a real-valued function $\theta$ and constants $\delta > 0$ and $A \in \mathbb{R}$ such that
(a) 
for any $t_2 \ge t_1$ and for any eigenvalue $\lambda_{0,i}(t)$, $i = 1, \dots, N_0$, of $U_0(t)$,
$$\int_{t_1}^{t_2} \left( \mathrm{Re}\, \lambda_{0,i}(t) - \theta(t) \right) dt \le A - (N_0 - 1 + \delta) \log(1 + t_2 - t_1);$$
and
(b) 
for any $t_2 \ge t_1$ and for any eigenvalue $\lambda_{1,i}(t)$, $i = 1, \dots, N_1$, of $U_1(t)$,
$$\int_{t_1}^{t_2} \left( \theta(t) - \mathrm{Re}\, \lambda_{1,i}(t) \right) dt \le A - (N_1 - 1 + \delta) \log(1 + t_2 - t_1).$$
Then, there is a constant $a$ and an $N_0$-dimensional subspace $E$ of $\mathbb{R}^N$ such that $x(a) \in E$ implies that $\lim_{t\to\infty} |x(t)|_\theta = 0$.
Proof. 
We use the notation developed in Section 2: $\Phi_0(t,\tau) = P_0 \Phi(t,\tau)$, $\Phi_1(t,\tau) = P_1 \Phi(t,\tau)$, and $\Phi(t,\tau) = \Phi_0(t,\tau) + \Phi_1(t,\tau)$. Let
$$E_0 := \{ P_0 V \mid V \in \mathbb{R}^N \}, \qquad y(a) \in E_0, \qquad y(t) = \Phi(t,a)\, y(a).$$
We look for a solution of the following integral equation:
$$x(t) = y(t) + \int_a^t P_0 \Phi(t,\tau)\, E(\tau)\, x(\tau)\, d\tau - \int_t^\infty P_1 \Phi(t,\tau)\, E(\tau)\, x(\tau)\, d\tau. \tag{5}$$
An $x(t)$ that solves Equation (5) also solves Equation (4), which follows from the fact that each column of $\Phi(t,\tau)$ solves Equation (3):
$$\begin{aligned} x'(t) &= y'(t) + P_0 \Phi(t,t) E(t) x(t) + \int_a^t P_0\, \tfrac{\partial}{\partial t}\Phi(t,\tau)\, E(\tau) x(\tau)\, d\tau + P_1 \Phi(t,t) E(t) x(t) - \int_t^\infty P_1\, \tfrac{\partial}{\partial t}\Phi(t,\tau)\, E(\tau) x(\tau)\, d\tau \\ &= U(t) y(t) + E(t) x(t) + P_0 U(t) \int_a^t P_0 \Phi(t,\tau) E(\tau) x(\tau)\, d\tau - P_1 U(t) \int_t^\infty P_1 \Phi(t,\tau) E(\tau) x(\tau)\, d\tau \\ &= U(t) y(t) + E(t) x(t) + P_0 U(t) \left( x^0(t) - y^0(t) \right) + P_1 U(t) \left( x^1(t) - y^1(t) \right) \\ &= \left( U(t) + E(t) \right) x(t). \end{aligned}$$
In particular, we have $P_0 x(a) = P_0 y(a) = y(a)$.
Suppose such an $x(t)$ exists. Multiplying both sides by $e^{-\int_a^t \theta(\eta)\, d\eta}$ and using the block matrix calculus, we have
$$\begin{aligned} x(t)\, e^{-\int_a^t \theta(\eta) d\eta} &= y(t)\, e^{-\int_a^t \theta(\eta) d\eta} + \int_a^t \Phi_0(t,\tau)\, e^{-\int_\tau^t \theta(\eta) d\eta}\, P_0 E(\tau)\, x(\tau)\, e^{-\int_a^\tau \theta(\eta) d\eta}\, d\tau - \int_t^\infty \Phi_1(t,\tau)\, e^{\int_t^\tau \theta(\eta) d\eta}\, P_1 E(\tau)\, x(\tau)\, e^{-\int_a^\tau \theta(\eta) d\eta}\, d\tau \\ &= y(t)\, e^{-\int_a^t \theta(\eta) d\eta} + I_1 + I_2. \end{aligned} \tag{6}$$
From Proposition 1 and our assumptions, we note that the matrix $\Phi_0(t,\tau)\, e^{-\int_\tau^t \theta(\eta) d\eta}$ is bounded for $t \ge \tau$. More precisely,
$$\left| \Phi_0(t,\tau)\, e^{-\int_\tau^t \theta(\eta) d\eta}\, y(\tau) \right| \le C\, (1 + t - \tau)^{N_0 - 1} \exp\left( \int_\tau^t \left( \bar\lambda_0(\eta) - \theta(\eta) \right) d\eta \right) |y(\tau)| \le C e^A (1 + t - \tau)^{-\delta}\, |y(\tau)|. \tag{7}$$
Similarly, the matrix $\Phi_1(t,\tau)\, e^{\int_t^\tau \theta(\eta) d\eta}$ is bounded for $\tau \ge t$. More precisely,
$$\left| \Phi_1(t,\tau)\, e^{\int_t^\tau \theta(\eta) d\eta}\, y(\tau) \right| \le C\, (1 + \tau - t)^{N_1 - 1} \exp\left( -\int_t^\tau \left( \underline\lambda_1(\eta) - \theta(\eta) \right) d\eta \right) |y(\tau)| \le C e^A (1 + \tau - t)^{-\delta}\, |y(\tau)|.$$
Since $\|E(t)\|$ is integrable, we can choose $a$ so that $C e^A \int_a^\infty \|E(\tau)\|\, d\tau < \frac{1}{2}$. Then, we have
$$|x(t) - y(t)|_\theta \le \int_a^t \left\| \Phi_0(t,\tau)\, e^{-\int_\tau^t \theta(\eta) d\eta} \right\| \|E(\tau)\|\, |x(\tau)|_\theta\, d\tau + \int_t^\infty \left\| \Phi_1(t,\tau)\, e^{\int_t^\tau \theta(\eta) d\eta} \right\| \|E(\tau)\|\, |x(\tau)|_\theta\, d\tau \le \frac{1}{2} \|x\|_\theta.$$
Let $y$ be fixed and let $S_y$ be the operator on $L^\infty_\theta([a,\infty))$ that maps $x \in L^\infty_\theta([a,\infty))$ to the function defined by the right-hand side of Equation (5). The previous estimate shows that $\|S_y x\|_\theta \le \|y\|_\theta + \frac{1}{2}\|x\|_\theta < \infty$, and thus $S_y x \in L^\infty_\theta([a,\infty))$. A solution of the integral equation is then a fixed point of the operator. The same estimate shows that $\|S_y x - S_y \bar x\|_\theta \le \frac{1}{2} \|x - \bar x\|_\theta$. By the contraction mapping principle, there is a unique fixed point. It is clear that, for the fixed point, $\|x\|_\theta \le 2 \|y\|_\theta$. The map $y(a) \mapsto x(a)$, where the function $x$ is the unique fixed point of $S_y$ and $y(t) = \Phi(t,a)\, y(a)$, is a linear map from $E_0$ to its image. As $P_0 x(a) = y(a)$, the linear map has rank $N_0$. $E$ is then the range space.
We know that $|y(t)|_\theta \to 0$ as $t \to \infty$. It remains to show that $|x(t)|_\theta \to 0$ as $t \to \infty$ as well. We show that $|x(t) - y(t)|_\theta \to 0$. Recall that the first integral in Equation (6) is $I_1$ and the second integral is $I_2$. $I_2$ converges to 0 since
$$\lim_{t\to\infty} \int_t^\infty \left\| \Phi_1(t,\tau)\, e^{\int_t^\tau \theta(\eta) d\eta} \right\| \|E(\tau)\|\, |x(\tau)|_\theta\, d\tau \le 2 C e^A \|y\|_\theta \lim_{t\to\infty} \int_t^\infty \|E(\tau)\|\, d\tau = 0.$$
For $I_1$, we divide the integral into
$$\left( \int_a^{t_1} + \int_{t_1}^t \right) \Phi_0(t,\tau)\, e^{-\int_\tau^t \theta(\eta) d\eta}\, P_0 E(\tau)\, x(\tau)\, e^{-\int_a^\tau \theta(\eta) d\eta}\, d\tau$$
for some $t_1 \le t$. For any given $\epsilon > 0$, we show that we can choose $t$ and $t_1$ so large, keeping the order $t \ge t_1$, that $I_1 \le \epsilon$. By the same reasoning used for $I_2$, we can choose $t_1$ so large that the integral over the interval $[t_1, t]$ is smaller than $\frac{\epsilon}{2}$. On the interval $[a, t_1]$, we write the integral in the form
$$\Phi_0(t, t_1)\, e^{-\int_{t_1}^t \theta(\eta) d\eta} \int_a^{t_1} \Phi_0(t_1, \tau)\, e^{-\int_\tau^{t_1} \theta(\eta) d\eta}\, P_0 E(\tau)\, x(\tau)\, e^{-\int_a^\tau \theta(\eta) d\eta}\, d\tau.$$
From Equation (7), $\left\| \Phi_0(t, t_1)\, e^{-\int_{t_1}^t \theta(\eta) d\eta} \right\| \to 0$ as $t \to \infty$, and the integral over $[a, t_1]$ above is finite. Therefore, we can choose $t$ so large that the above is smaller than $\frac{\epsilon}{2}$. □

4. Discussion

For an illustration of the usefulness of the generalization, we consider the following simple example:
$$\begin{pmatrix} y_1(t) \\ y_2(t) \\ y_3(t) \end{pmatrix}' = \begin{pmatrix} -1 + \frac{1}{t} & 1 & 0 \\ 0 & -1 - \frac{1}{t} & 0 \\ 0 & 0 & 0 \end{pmatrix} \begin{pmatrix} y_1(t) \\ y_2(t) \\ y_3(t) \end{pmatrix} = U(t)\, y(t)$$
and
$$x'(t) = U(t)\, x(t) + E(t)\, x(t).$$
$E(t)$ is assumed to satisfy the smallness condition. In the example above, at any finite $t$, the three eigenvalues of $U(t)$ are all distinct, and thus $U(t)$ is diagonalizable. However, in the limit $t \to \infty$, the two eigenvalues $-1 + \frac{1}{t}$ and $-1 - \frac{1}{t}$ become identical, and the limit is defective.
With respect to the asymptotic behavior, we are interested in the behavior corresponding to the first $2 \times 2$ block. For the unperturbed problem, it is clear that, for any initial data $y(a) = (c_1, c_2, 0)^\top$, $y(t)$ decays exponentially to 0. We can conclude the same for the perturbed problem using our generalized theorem: we let $U_0 = \begin{pmatrix} -1 + \frac{1}{t} & 1 \\ 0 & -1 - \frac{1}{t} \end{pmatrix}$ and $U_1 = 0$ and choose $\theta \equiv -\frac{1}{2}$. Then, the assumptions of the theorem are satisfied, and thus for some $a$ there is a two-dimensional subspace $E$ such that $x(a) \in E$ implies that $\left| x(t)\, e^{\frac{t-a}{2}} \right| \to 0$ as $t \to \infty$.
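The conclusion can be checked numerically. In the sketch below (our illustration), $E(t)$ is taken to be $e^{-t}$ times a fixed matrix, which satisfies the integrability condition; its third row is set to zero so that the span of the first two coordinates is invariant and plays the role of the subspace $E$. The weighted quantity $|x(t)|\, e^{(t-a)/2}$ then visibly decays along such orbits.

```python
# Discussion example: U(t) has the defective limit in its upper 2x2 block;
# E(t) below is an assumed integrable perturbation (zero third row so that
# span{e1, e2} is invariant and serves as the subspace E).
import numpy as np
from scipy.integrate import solve_ivp

def U(t):
    return np.array([[-1.0 + 1.0 / t, 1.0, 0.0],
                     [0.0, -1.0 - 1.0 / t, 0.0],
                     [0.0, 0.0, 0.0]])

def E(t):
    return np.exp(-t) * np.array([[0.0, 0.1, 0.2],
                                  [0.3, 0.0, 0.1],
                                  [0.0, 0.0, 0.0]])

rhs = lambda t, x: (U(t) + E(t)) @ x
a = 1.0
x_a = np.array([1.0, 1.0, 0.0])            # initial data in the block
for T in (5.0, 10.0, 15.0):
    xT = solve_ivp(rhs, (a, T), x_a, rtol=1e-10, atol=1e-13).y[:, -1]
    print(T, np.max(np.abs(xT)) * np.exp((T - a) / 2))   # decreasing to 0
```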

Funding

This work was supported by the National Research Foundation of Korea (NRF) grant funded by the Korea government (MSIT) (No. 2019R1F1A1058323).

Conflicts of Interest

The author declares no conflict of interest.

Appendix A

Theorem A1.
[2] (Levinson's Theorem) Let $x(t) \in \mathbb{R}^N$ and $x'(t) = \left( \Lambda(t) + E(t) \right) x$, where $\Lambda(t)$ is a diagonal matrix with bounded diagonal entries $\lambda_j(t)$, $j = 1, \dots, N$, and $E(t)$ is a matrix such that $\|E\|$ is integrable, i.e., $\int_{a_0}^\infty \|E(t)\|\, dt < \infty$ for some $a_0$. Fix an index $k$. Suppose we can find a constant $A$ so that either of the following two membership conditions holds for every $i$:
$i \in I_1$ if
$$\int_{a_0}^{t} \mathrm{Re}\left( \lambda_k(s) - \lambda_i(s) \right) ds \to \infty \ \text{ as } t \to \infty \ \text{ for some } a_0, \quad \text{and} \quad \int_{t_1}^{t_2} \mathrm{Re}\left( \lambda_k(s) - \lambda_i(s) \right) ds > A \ \text{ whenever } t_2 \ge t_1 \ge 0;$$
and $i \in I_2$ if
$$\int_{t_1}^{t_2} \mathrm{Re}\left( \lambda_k(s) - \lambda_i(s) \right) ds < A \ \text{ whenever } t_2 \ge t_1 \ge 0.$$
Then, there is an orbit $\varphi_k(t)$, $t \ge a$, for some $a$ such that
$$\lim_{t\to\infty} \varphi_k(t) \exp\left( -\int_a^t \lambda_k(s)\, ds \right) = \hat e_k,$$
where $\hat e_k$ is the $k$th coordinate basis vector of $\mathbb{R}^N$.

References

  1. Levinson, N. The asymptotic nature of solutions of linear systems of differential equations. Duke Math. J. 1948, 15, 111–126.
  2. Coddington, E.A.; Levinson, N. Theory of Ordinary Differential Equations; McGraw-Hill Inc.: New York, NY, USA, 1955; pp. 32–58.
  3. Hartman, P.; Wintner, A. Asymptotic integrations of linear differential equations. Am. J. Math. 1955, 77, 45–86.
  4. Harris, W.A., Jr.; Lutz, D.A. A unified theory of asymptotic integration. J. Math. Anal. Appl. 1977, 57, 571–586.
  5. Eastham, M.S.P. The Asymptotic Solution of Linear Differential Systems: Application of the Levinson Theorem; London Mathematical Society Monographs New Series Volume 4; Oxford University Press: Oxford, UK, 1989.
  6. Hsieh, P.F.; Xie, F. Asymptotic diagonalization of a linear ordinary differential system. Kumamoto J. Math. 1994, 7, 27–50.
  7. Bodine, S.; Lutz, D.A. On asymptotic equivalence of perturbed linear systems of differential and difference equations. J. Math. Anal. Appl. 2007, 326, 1174–1189.
  8. Bodine, S.; Lutz, D.A. Asymptotic Integration of Differential and Difference Equations; Lecture Notes in Mathematics Volume 2129; Springer International Publishing: Basel, Switzerland, 2015.
  9. Markus, L.; Yamabe, H. Global stability criteria for differential systems. Osaka Math. J. 1960, 12, 305–317.
  10. Devinatz, A.; Kaplan, J.L. Asymptotic estimates for solutions of linear systems of ordinary differential equations having multiple characteristic roots. Indiana Univ. Math. J. 1972, 22, 355–366.
  11. Bodine, S.; Lutz, D.A. Asymptotic integration under weak dichotomies. Rocky Mt. J. Math. 2010, 40, 51–75.
  12. Dieci, L.; Eirola, T. On smooth decompositions of matrices. SIAM J. Matrix Anal. Appl. 1999, 20, 800–819.
