
Regularity for Semilinear Neutral Hyperbolic Equations with Cosine Families

Department of Applied Mathematics, Pukyong National University, Busan 608-737, Korea
*
Author to whom correspondence should be addressed.
Mathematics 2020, 8(7), 1157; https://doi.org/10.3390/math8071157
Submission received: 1 June 2020 / Revised: 3 July 2020 / Accepted: 11 July 2020 / Published: 15 July 2020
(This article belongs to the Special Issue New Trends in Functional Equation)

Abstract

The purpose of this paper is to obtain regularity results for solutions of semilinear neutral hyperbolic equations with a nonlinear convolution term. The principal operator is the infinitesimal generator of cosine and sine families. In order to derive a variation of constants formula for solutions, we make use of the basic properties of cosine and sine families.

1. Introduction

This paper establishes the regularity of solutions of the following abstract semilinear neutral hyperbolic equation in a Banach space $X$:
$$\frac{d}{dt}\bigl[w'(t)+g(t,w(t))\bigr]=Aw(t)+F(t,w)+f(t),\quad 0<t\le T,\qquad w(0)=x_0,\quad w'(0)=y_0.\eqno(1)$$
The principal operator A is the infinitesimal generator of a cosine family C ( t ) ( t R ) . The nonlinear part is given by
$$F(t,w)=\int_0^t k(t-s)\,h(s,w(s))\,ds.$$
Here, $k\in L^2(0,T)$ and the mapping $h:[0,T]\times D(A)\to X$ is such that $w\mapsto h(t,w)$ is Lipschitz continuous. The nonlinear mapping $g:[0,T]\times X\to X$ will be described in detail in Section 3.
Semilinear neutral differential equations have been considered by many authors [1,2] and the references therein. We refer to [3,4] for partial neutral integro-differential equations. The existence of solutions for neutral differential equations with state-dependent delay has been studied in [5,6]. In [7,8] a hyperbolic equation of convolution type is treated. In [9,10] oscillatory properties of solutions of certain nonlinear impulsive hyperbolic partial differential equations of neutral type are investigated, and new sufficient conditions as well as a necessary and sufficient condition for oscillation of the equations are established. The regularity of solutions of parabolic-type equations under general conditions on the nonlinear terms, which is suitable for applications to nonlinear systems, is considered in [11,12]. Such and other classes of differential equations arise in many engineering and scientific tasks; for instance, almost automorphic mild solutions of hyperbolic evolution equations with Stepanov-like almost automorphic forcing terms have been studied in [13], and the local well-posedness of the Cauchy problem for the 2D compressible Navier–Stokes–Smoluchowski equations with vacuum was considered in [14]. Recently, regularity problems for second order differential equations have been discussed in [15,16,17]. Fixed points of locally asymptotically nonexpansive cosine families are discussed in [17,18].
In this paper, we take an approach different from that of previous works (see [2,19,20,21]) to discuss solutions of the Cauchy initial value problem. By means of $L^2$-regularity results, we obtain the global existence of solutions of semilinear neutral hyperbolic equations under more general hypotheses on the nonlinear terms. Motivated by the $L^2$-regularity results for the linear cases in [19,22], we show that those results remain valid for the semilinear neutral problem (1).
The content is summarized as follows. In Section 2, we introduce some notation and preliminaries. Section 3 is devoted to the regularity and existence of solutions of Equation (1). We prove the existence, for each $T>0$, of a solution $w\in L^2(0,T;D(A))\cap W^{1,2}(0,T;E)$ when $f:\mathbb{R}\to X$ is continuously differentiable and $(x_0,y_0,k)\in D(A)\times E\times W^{1,2}(0,T)$. Here, the space $E$ is an intermediate space between $D(A)$ and $X$. Our development relies on basic properties of cosine and sine families from [23,24]. We make general assumptions on the nonlinear terms in order to obtain the regularity of solutions of Equation (1). An example illustrating our results is given in the last section.

2. Preliminaries

We first introduce some notation, definitions and preliminaries. If $X$ is a Banach space with norm $\|\cdot\|$ and $1<p<\infty$, then $L^p(0,T;X)$ is the collection of all strongly measurable functions from $(0,T)$ into $X$ whose norms have integrable $p$-th powers, and $W^{m,p}(0,T;X)$ is the set of all functions $f$ whose distributional derivatives $D^{\alpha}f$ up to order $m$ belong to $L^p(0,T;X)$.
Definition 1
([23]). A family $C(t)$ $(t\in\mathbb{R})$ of bounded linear operators in $X$ is called a cosine family if
c(1) $C(0)=I$ and $C(s+t)+C(s-t)=2C(s)C(t)$ for all $s,t\in\mathbb{R}$;
c(2) $C(t)x$ is continuous in $t$ on $\mathbb{R}$ for each fixed $x\in X$.
Let $C(t)$ $(t\in\mathbb{R})$ be a cosine family in $X$. Then the sine family $S(t)$ $(t\in\mathbb{R})$ is defined by
$$S(t)x=\int_0^t C(s)x\,ds,\qquad x\in X.$$
Thus, $S(t)x$ is also strongly continuous, that is, $S(t)x$ is continuous in $t$ on $\mathbb{R}$ for each fixed $x\in X$.
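As a concrete scalar illustration (not part of the original text), the family $C(t)=\cos(\omega t)$ on $X=\mathbb{R}$ satisfies c(1)–c(2), and its sine family is $S(t)=\sin(\omega t)/\omega$. A minimal numerical sketch of these defining relations:

```python
import math

# Scalar cosine family C(t) = cos(w t) on X = R; its generator is A = -w^2.
w = 2.0
C = lambda t: math.cos(w * t)
S = lambda t: math.sin(w * t) / w  # S(t)x = integral_0^t C(s)x ds

# c(1): C(0) = I and the d'Alembert functional equation.
assert abs(C(0.0) - 1.0) < 1e-12
for s, t in [(0.3, 0.7), (1.1, -0.4)]:
    assert abs(C(s + t) + C(s - t) - 2 * C(s) * C(t)) < 1e-12

# S(t) as the integral of C(s): compare with a midpoint Riemann sum.
t, n = 0.9, 20000
riemann = sum(C((k + 0.5) * t / n) for k in range(n)) * (t / n)
assert abs(riemann - S(t)) < 1e-6
print("cosine family identities verified")
```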
The operator $A:X\to X$ is defined by
$$Ax=\frac{d^2}{dt^2}C(t)x\Big|_{t=0},$$
and is called the infinitesimal generator of the cosine family $C(t)$ $(t\in\mathbb{R})$. Its domain is
$$D(A)=\{x\in X:\ C(t)x \text{ is a twice continuously differentiable function of } t\},$$
with norm
$$\|x\|_{D(A)}=\|x\|+\sup\Bigl\{\Bigl\|\frac{d}{dt}C(t)x\Bigr\|:\ t\in\mathbb{R}\Bigr\}+\|Ax\|.$$
We introduce the set
$$E=\{x\in X:\ C(t)x \text{ is a once continuously differentiable function of } t\},$$
with norm
$$\|x\|_E=\|x\|+\sup\Bigl\{\Bigl\|\frac{d}{dt}C(t)x\Bigr\|:\ t\in\mathbb{R}\Bigr\}.$$
We know that D ( A ) and E with given norms are Banach spaces.
The following Lemma is a summary of Proposition 2.1 and Proposition 2.2 of [23].
Lemma 1.
Let $C(t)$ $(t\in\mathbb{R})$ be a cosine family in $X$. The following properties hold:
c(3) there are constants $\omega\ge 0$ and $K\ge 1$ such that
$$\|C(t)\|\le Ke^{\omega|t|}\ \text{ for all } t\in\mathbb{R},\qquad \|S(t_1)-S(t_2)\|\le K\Bigl|\int_{t_1}^{t_2}e^{\omega|s|}\,ds\Bigr|\ \text{ for all } t_1,t_2\in\mathbb{R};$$
c(4) if $x\in E$, then
$$S(t)x\in D(A)\quad\text{and}\quad \frac{d}{dt}C(t)x=AS(t)x=S(t)Ax=\frac{d^2}{dt^2}S(t)x;$$
c(5) if $x\in D(A)$, then
$$C(t)x\in D(A)\quad\text{and}\quad \frac{d^2}{dt^2}C(t)x=AC(t)x=C(t)Ax;$$
c(6) if $x\in X$ and $r,s\in\mathbb{R}$, then
$$\int_r^s S(\tau)x\,d\tau\in D(A)\quad\text{and}\quad A\Bigl(\int_r^s S(\tau)x\,d\tau\Bigr)=C(s)x-C(r)x.$$
The following Lemma is from Proposition 2.4 of [23].
Lemma 2.
Let $A$ be the infinitesimal generator of a cosine family $C(t)$ $(t\in\mathbb{R})$. If $f:\mathbb{R}\to X$ is continuously differentiable and $(x_0,y_0)\in D(A)\times E$, then
$$w(t)=C(t)x_0+S(t)y_0+\int_0^t S(t-s)f(s)\,ds,\qquad t\in\mathbb{R},\eqno(2)$$
belongs to $D(A)$ and $w$ is twice continuously differentiable. Moreover, $w$ satisfies
$$w''(t)=Aw(t)+f(t),\quad t\in\mathbb{R},\qquad w(0)=x_0,\quad w'(0)=y_0.\eqno(3)$$
Conversely, if $f:\mathbb{R}\to X$ is continuous, $w$ satisfies (3), $w(t)\in D(A)$, and $w$ is twice continuously differentiable, then
$$w(t)=C(t)x_0+S(t)y_0+\int_0^t S(t-s)f(s)\,ds,\qquad t\in\mathbb{R}.\eqno(4)$$
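The variation of constants formula of Lemma 2 can be checked numerically in the scalar case (an illustration added here, not taken from the original paper). With $A=-1$, so $C(t)=\cos t$ and $S(t)=\sin t$, and $f(t)=t$, the problem $w''=-w+t$, $w(0)=x_0$, $w'(0)=y_0$ has the closed-form solution $w(t)=x_0\cos t+(y_0-1)\sin t+t$, which the formula reproduces:

```python
import math

# Scalar case A = -1: C(t) = cos t, S(t) = sin t; forcing f(t) = t.
x0, y0 = 0.5, -1.2
f = lambda s: s
exact = lambda t: x0 * math.cos(t) + (y0 - 1.0) * math.sin(t) + t

def mild(t, n=20000):
    # w(t) = C(t)x0 + S(t)y0 + \int_0^t S(t - s) f(s) ds  (midpoint rule)
    h = t / n
    integral = sum(math.sin(t - (k + 0.5) * h) * f((k + 0.5) * h)
                   for k in range(n)) * h
    return x0 * math.cos(t) + y0 * math.sin(t) + integral

for t in (0.5, 1.0, 2.0):
    assert abs(mild(t) - exact(t)) < 1e-6
print("variation of constants formula verified")
```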
Proposition 1.
Let $f$ be continuously differentiable and $(x_0,y_0)\in D(A)\times E$. Then $w(t)$ defined by (4) is a solution of the linear Equation (3). Moreover, $w$ belongs to $L^2(0,T;D(A))\cap W^{1,2}(0,T;E)$, and there exists a positive constant $C_1$ such that for any $T>0$,
$$\|w\|_{L^2(0,T;D(A))}\le C_1\bigl(1+\|x_0\|_{D(A)}+\|y_0\|_E+\|f\|_{W^{1,2}(0,T;X)}\bigr).\eqno(5)$$
Proof. 
By virtue of Lemma 2, $w$ satisfies Equation (3), $w(t)\in D(A)$, and $w$ is twice continuously differentiable. It is easily seen that there is a constant $C>0$ such that
$$\|w\|_{L^2(0,T;X)}\le C\bigl(\|x_0\|_{D(A)}+\|y_0\|_E+\|f\|_{L^2(0,T;X)}\bigr).\eqno(6)$$
Now, we prove that $w\in L^2(0,T;D(A))$. Using c(3) and c(5), it holds that
$$\int_0^T\|AC(t)x_0\|^2\,dt\le K(e^{2\omega T}-1)\|x_0\|_{D(A)}^2,\eqno(7)$$
and if $y_0\in E$, by c(4) we have
$$\int_0^T\|AS(t)y_0\|^2\,dt=\int_0^T\Bigl\|\frac{d}{dt}C(t)y_0\Bigr\|^2\,dt\le T\|y_0\|_E^2.\eqno(8)$$
It is proved in Proposition 2.4 of [23] that
$$A\int_0^t S(t-s)f(s)\,ds=C(t)f(0)-f(0)+\int_0^t\bigl(C(t-s)-I\bigr)f'(s)\,ds.\eqno(9)$$
So, since
$$\int_0^T\Bigl\|\int_0^t C(t-s)f'(s)\,ds\Bigr\|^2dt\le K^2(e^{\omega T}-1)^2\int_0^T\Bigl(\int_0^t\|f'(s)\|\,ds\Bigr)^2dt\le K^2(e^{\omega T}-1)^2\int_0^T t\int_0^t\|f'(s)\|^2\,ds\,dt\le K^2(e^{\omega T}-1)^2\frac{T^2}{2}\int_0^T\|f'(s)\|^2\,ds,$$
we have
$$\int_0^T\Bigl\|A\int_0^t S(t-s)f(s)\,ds\Bigr\|^2dt\le\int_0^T\|C(t)f(0)\|^2\,dt+T\|f(0)\|^2+\int_0^T\Bigl\|\int_0^t C(t-s)f'(s)\,ds\Bigr\|^2dt+\int_0^T\Bigl\|\int_0^t f'(s)\,ds\Bigr\|^2dt\le K^2e^{2\omega T}T\|f(0)\|^2+T\|f(0)\|^2+\bigl\{K^2(e^{\omega T}-1)^2+1\bigr\}\frac{T^2}{2}\int_0^T\|f'(s)\|^2\,ds.\eqno(10)$$
Noting that, from c(4),
$$\frac{d}{dt}C(t)\int_0^t S(t-s)f(s)\,ds=AS(t)\int_0^t S(t-s)f(s)\,ds=S(t)A\int_0^t S(t-s)f(s)\,ds,$$
we can show the relation (5) from (6)–(10). Combining (2) and c(3), we also obtain that an estimate analogous to (5) holds for $w\in W^{1,2}(0,T;E)$. □
Remark 1.
Let $(x_0,y_0)\in D(A)\times E$ and let $f$ be continuously differentiable. Let us remark that if $w$ is a solution of (3) on an interval $[0,t_1+t_2]$ with $t_1,t_2>0$, then for $t\in[t_1,t_1+t_2]$ we have
$$w(t)=C(t-t_1)w(t_1)+S(t-t_1)w'(t_1)+\int_{t_1}^t S(t-s)f(s)\,ds=C(t-t_1)\Bigl\{C(t_1)x_0+S(t_1)y_0+\int_0^{t_1}S(t_1-s)f(s)\,ds\Bigr\}+S(t-t_1)\Bigl\{AS(t_1)x_0+C(t_1)y_0+\int_0^{t_1}C(t_1-s)f(s)\,ds\Bigr\}+\int_{t_1}^t S(t-s)f(s)\,ds=C(t)x_0+S(t)y_0+\int_0^t S(t-s)f(s)\,ds.$$
Here, we used the following basic properties of $C(t)$:
$$S(t)AS(s)=AS(t)S(s)=\tfrac12 C(t+s)-\tfrac12 C(t-s)=C(t+s)-C(t)C(s)$$
for all $s,t\in\mathbb{R}$. This means that the mapping $t\mapsto w(t_1+t)$ is a solution of (3) on $[0,t_2]$ with initial data $(w(t_1),w'(t_1))\in D(A)\times E$.
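The algebraic identities used in the remark can be sanity-checked in the scalar case (an added illustration, not from the original): take $C(t)=\cos(\omega t)$, $S(t)=\sin(\omega t)/\omega$ and $A=-\omega^2$, so that $S(t)AS(s)=-\sin(\omega t)\sin(\omega s)$.

```python
import math

# Scalar check of S(t)AS(s) = (1/2)C(t+s) - (1/2)C(t-s) = C(t+s) - C(t)C(s)
# with C(t) = cos(w t), S(t) = sin(w t)/w, generator A = -w^2.
w = 1.7
A = -w * w
C = lambda t: math.cos(w * t)
S = lambda t: math.sin(w * t) / w

for s, t in [(0.2, 0.9), (-0.6, 1.3)]:
    lhs = S(t) * A * S(s)
    mid = 0.5 * C(t + s) - 0.5 * C(t - s)
    rhs = C(t + s) - C(t) * C(s)
    assert abs(lhs - mid) < 1e-12 and abs(mid - rhs) < 1e-12
print("S(t)AS(s) identity verified")
```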

3. Semilinear Neutral Equations

This section deals with the regularity of solutions of the semilinear neutral second order initial value problem (1) in a Banach space $X$. The nonlinear part of Equation (1) is given by
$$F(t,w)=\int_0^t k(t-s)\,h(s,w(s))\,ds\eqno(12)$$
for $w\in L^2(0,T;D(A))$ and $k\in L^2(0,T)$.
Assumption (A). Let $h:[0,T]\times D(A)\to X$ be a nonlinear operator such that, for a positive constant $L$,
(h1) $\|h(t,w_1)-h(t,w_2)\|\le L\|w_1-w_2\|_{D(A)}$,
(h2) $h(t,0)=0$.
Assumption (B). Let $g:[0,T]\times X\to X$ be a nonlinear operator satisfying the following conditions:
(i) For any $x\in X$, the mapping $g(\cdot,x)$ is strongly measurable;
(ii) There exists a positive constant $L_g$ such that
$$\|Ag(t,0)\|\le L_g,\qquad \|Ag(t,x)-Ag(t,\hat{x})\|\le L_g\|x-\hat{x}\|$$
for all $t\in[0,T]$ and $x,\hat{x}\in X$.
We will find a mild solution of Equation (1), which is represented by the integral equation
$$w(t)=C(t)x_0+S(t)\bigl[y_0+g(0,x_0)\bigr]+\int_0^t S(t-s)\bigl\{F(s,w)+f(s)\bigr\}\,ds-\int_0^t C(t-s)g(s,w(s))\,ds.\eqno(11)$$
Remark 2.
In [25], the approximate controllability of Equation (1) was investigated under the general assumption that $h(t,\cdot)$ is a continuous mapping from $X$ into itself satisfying
$$\|h(t,w_1)-h(t,w_2)\|\le L\|w_1-w_2\|$$
for a positive constant $L$.
For brevity, we assume that $0\in\rho(A)$ and that the closed half plane $\{\lambda:\operatorname{Re}\lambda\ge 0\}$ is contained in $\rho(A)$. We remark that $A:X\to X$ is unbounded, but we may assume that $\|A^{-1}\|\le M$ for a positive constant $M$.
Lemma 3.
Let Assumption (A) be satisfied. Then for $w\in L^2(0,T;D(A))$ and $T>0$,
$$\|F(\cdot,w)\|_{L^2(0,T;X)}\le L\|k\|_{L^2(0,T)}\sqrt{T}\,\|w\|_{L^2(0,T;D(A))}.$$
Moreover,
$$\|F(\cdot,w_1)-F(\cdot,w_2)\|_{L^2(0,T;X)}\le L\|k\|_{L^2(0,T)}\sqrt{T}\,\|w_1-w_2\|_{L^2(0,T;D(A))}$$
for $w_1,w_2\in L^2(0,T;D(A))$.
Proof. 
From Assumption (A) and the Hölder inequality, we have
$$\|F(\cdot,w)\|_{L^2(0,T;X)}^2\le\int_0^T\Bigl\|\int_0^t k(t-s)h(s,w(s))\,ds\Bigr\|^2dt\le\|k\|_{L^2(0,T)}^2\int_0^T\int_0^t L^2\|w(s)\|_{D(A)}^2\,ds\,dt\le L^2\|k\|_{L^2(0,T)}^2\,T\,\|w\|_{L^2(0,T;D(A))}^2.$$
The second estimate is obtained similarly. □
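The Lemma 3 bound can be illustrated numerically in a scalar setting (an added sketch, not in the original): take $h(s,w)=Lw$, so (h1)–(h2) hold, pick sample $k$ and $w$, and compare the $L^2$ norms via Riemann sums.

```python
import math

# Numerical sanity check of the Lemma 3 estimate in the scalar case
# h(s, w) = L * w, with midpoint Riemann sums for all integrals.
T, n = 1.0, 400
h_step = T / n
L = 0.8
k = lambda t: math.exp(-t)         # sample kernel in L^2(0, T)
w = lambda s: math.sin(3 * s)      # sample "solution"

def F(t):
    # F(t, w) = \int_0^t k(t - s) h(s, w(s)) ds
    m = max(1, int(t / h_step))
    return sum(k(t - (j + 0.5) * t / m) * L * w((j + 0.5) * t / m)
               for j in range(m)) * (t / m)

l2 = lambda g: math.sqrt(sum(g((i + 0.5) * h_step) ** 2 for i in range(n)) * h_step)
lhs = l2(F)
rhs = L * l2(k) * math.sqrt(T) * l2(w)
assert lhs <= rhs  # ||F(., w)||_{L^2} <= L ||k||_{L^2} sqrt(T) ||w||
print(f"||F||_L2 = {lhs:.4f} <= {rhs:.4f}")
```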
Lemma 4.
If $k$ belongs to $W^{1,2}(0,T)$, then
$$A\int_0^t S(t-s)F(s,w)\,ds=\int_0^t\bigl(C(t-s)-I\bigr)k(0)h(s,w(s))\,ds+\int_0^t\bigl(C(t-s)-I\bigr)\int_0^s k'(s-\tau)h(\tau,w(\tau))\,d\tau\,ds.\eqno(13)$$
Proof. 
The proof of (13) is easily obtained from the formula
$$A\int_0^t S(t-s)F(s,w)\,ds=\int_0^t\bigl(C(t-s)-I\bigr)\frac{d}{ds}F(s,w)\,ds$$
and
$$\frac{d}{ds}F(s,w)=\int_0^s k'(s-\tau)h(\tau,w(\tau))\,d\tau+k(0)h(s,w(s)).$$
 □
First of all, we give the following result on the local solvability of (11).
Theorem 1.
Let Assumptions (A) and (B) be satisfied. If $(x_0,y_0,k)\in D(A)\times E\times W^{1,2}(0,T)$ $(T>0)$, where $k$ is the kernel of the nonlinear term $F$, and $f:\mathbb{R}\to X$ is continuously differentiable, then there exists a time $T_0\le T$ such that Equation (1) has a unique solution $w$ in $L^2(0,T_0;D(A))\cap W^{1,2}(0,T_0;E)$.
Proof. 
Let us fix $T_0>0$ satisfying
$$C_2\equiv\omega^{-1}KLT_0^{3/2}(e^{\omega T_0}-1)\|k\|_{L^2(0,T_0)}+\bigl\{\omega^{-1}K(e^{\omega T_0}-1)+1\bigr\}\frac{T_0^{3/2}}{\sqrt3}L(Ke^{\omega T_0}+1)\|k\|_{W^{1,2}(0,T_0)}+\bigl\{\omega^{-1}K(e^{\omega T_0}-1)+1\bigr\}\frac{T_0}{\sqrt2}L(Ke^{\omega T_0}+1)|k(0)|+\frac{KML_g}{\sqrt{2\omega}}\sqrt{e^{2\omega T_0}-1}+\bigl\{\omega^{-1}K(e^{\omega T_0}-1)+1\bigr\}\frac{KL_g}{\sqrt{2\omega}}\sqrt{e^{2\omega T_0}-1}<1,\eqno(14)$$
where $K$ and $L$ are the constants in c(3) and (h1), respectively. For any $v\in L^2(0,T_0;D(A))$, let $J$ be the operator on $L^2(0,T_0;D(A))$ defined by
$$J(v)(t)=C(t)x_0+S(t)\bigl[y_0+g(0,x_0)\bigr]+\int_0^t S(t-s)\bigl\{F(s,v)+f(s)\bigr\}\,ds-\int_0^t C(t-s)g(s,v(s))\,ds.\eqno(15)$$
Then for each $v_1,v_2\in L^2(0,T_0;D(A))$,
$$J(v_1)(t)-J(v_2)(t)=\int_0^t S(t-s)\bigl\{F(s,v_1)-F(s,v_2)\bigr\}\,ds-\int_0^t C(t-s)\bigl[g(s,v_1(s))-g(s,v_2(s))\bigr]\,ds=:I_1-I_2,$$
where
$$I_1=\int_0^t S(t-s)\bigl\{F(s,v_1)-F(s,v_2)\bigr\}\,ds,\qquad I_2=\int_0^t C(t-s)\bigl[g(s,v_1(s))-g(s,v_2(s))\bigr]\,ds.$$
Now, we show that $I_i\in L^2(0,T_0;D(A))$ $(i=1,2)$. From Lemmas 3 and 4, it follows that for $0\le t\le T_0$,
$$\|I_1\|=\Bigl\|\int_0^t S(t-s)\{F(s,v_1)-F(s,v_2)\}\,ds\Bigr\|\le\omega^{-1}KLT_0(e^{\omega T_0}-1)\|k\|_{L^2(0,T_0)}\|v_1-v_2\|_{L^2(0,T_0;D(A))},\eqno(16)$$
and
$$\|AI_1\|=\Bigl\|A\int_0^t S(t-s)\{F(s,v_1)-F(s,v_2)\}\,ds\Bigr\|\le\Bigl\|\int_0^t(C(t-s)-I)\int_0^s k'(s-\tau)\{h(\tau,v_1(\tau))-h(\tau,v_2(\tau))\}\,d\tau\,ds\Bigr\|+\Bigl\|\int_0^t(C(t-s)-I)k(0)\{h(s,v_1(s))-h(s,v_2(s))\}\,ds\Bigr\|\le\sqrt{t}\,L(Ke^{\omega t}+1)\|k\|_{W^{1,2}(0,T_0)}\|v_1-v_2\|_{L^2(0,T_0;D(A))}+\sqrt{t}\,L(Ke^{\omega t}+1)|k(0)|\,\|v_1-v_2\|_{L^2(0,T_0;D(A))}.\eqno(17)$$
We also obtain
$$\frac{d}{dt}C(t)\int_0^t S(t-s)\{F(s,v_1)-F(s,v_2)\}\,ds=AS(t)\int_0^t S(t-s)\{F(s,v_1)-F(s,v_2)\}\,ds=S(t)A\int_0^t S(t-s)\{F(s,v_1)-F(s,v_2)\}\,ds.\eqno(18)$$
Therefore, we have
$$\|I_1\|_{L^2(0,T_0;D(A))}\le\omega^{-1}KLT_0^{3/2}(e^{\omega T_0}-1)\|k\|_{L^2(0,T_0)}\|v_1-v_2\|_{L^2(0,T_0;D(A))}+\bigl\{\omega^{-1}K(e^{\omega T_0}-1)+1\bigr\}\frac{T_0^{3/2}}{\sqrt3}L(Ke^{\omega T_0}+1)\|k\|_{W^{1,2}(0,T_0)}\|v_1-v_2\|_{L^2(0,T_0;D(A))}+\bigl\{\omega^{-1}K(e^{\omega T_0}-1)+1\bigr\}\frac{T_0}{\sqrt2}L(Ke^{\omega T_0}+1)|k(0)|\,\|v_1-v_2\|_{L^2(0,T_0;D(A))}.$$
Next, we show that $I_2\in L^2(0,T_0;D(A))$. From Assumption (B) and c(3), it follows that
$$\|I_2\|=\Bigl\|\int_0^t C(t-s)\{g(s,v_1)-g(s,v_2)\}\,ds\Bigr\|\le\frac{K}{\sqrt{2\omega}}\sqrt{e^{2\omega t}-1}\Bigl[\int_0^t\|A^{-1}A\{g(s,v_1)-g(s,v_2)\}\|^2\,ds\Bigr]^{1/2}\le\frac{KML_g}{\sqrt{2\omega}}\sqrt{e^{2\omega t}-1}\Bigl[\int_0^t\|v_1-v_2\|^2\,ds\Bigr]^{1/2}\le\frac{KML_g}{\sqrt{2\omega}}\sqrt{e^{2\omega t}-1}\,\|v_1-v_2\|_{L^2(0,T_0;D(A))},\eqno(19)$$
and
$$\|AI_2\|=\Bigl\|A\int_0^t C(t-s)\{g(s,v_1)-g(s,v_2)\}\,ds\Bigr\|\le\Bigl[\int_0^t\|C(t-s)\|^2\,ds\Bigr]^{1/2}\Bigl[\int_0^t\|A\{g(s,v_1)-g(s,v_2)\}\|^2\,ds\Bigr]^{1/2}\le\frac{KL_g}{\sqrt{2\omega}}\sqrt{e^{2\omega t}-1}\,\|v_1-v_2\|_{L^2(0,T_0;D(A))}.\eqno(20)$$
We also obtain
$$\frac{d}{dt}C(t)\int_0^t C(t-s)g(s,w(s))\,ds=S(t)A\int_0^t C(t-s)g(s,w(s))\,ds.\eqno(21)$$
Hence, by (19)–(21), we conclude that
$$\|I_2\|_{L^2(0,T_0;D(A))}\le\frac{KML_g}{\sqrt{2\omega}}\sqrt{e^{2\omega T_0}-1}\,\|v_1-v_2\|_{L^2(0,T_0;D(A))}+\bigl\{\omega^{-1}K(e^{\omega T_0}-1)+1\bigr\}\frac{KL_g}{\sqrt{2\omega}}\sqrt{e^{2\omega T_0}-1}\,\|v_1-v_2\|_{L^2(0,T_0;D(A))}.$$
Thus, from the estimates for $I_1$ and $I_2$, we conclude that
$$\|J(v_1)-J(v_2)\|_{L^2(0,T_0;D(A))}\le\|I_1\|_{L^2(0,T_0;D(A))}+\|I_2\|_{L^2(0,T_0;D(A))}\le C_2\|v_1-v_2\|_{L^2(0,T_0;D(A))},$$
where $C_2<1$ is the constant in (14). So, by the condition (14), the contraction mapping principle applied to $J$ defined by (15) guarantees that the solution of Equation (1) exists and is unique on $[0,T_0]$. □
Now, we establish the global existence of solutions of Equation (1) based on a variation of constants formula for solutions.
Theorem 2.
Let Assumptions (A) and (B) be satisfied. If $f:\mathbb{R}\to X$ is continuously differentiable and $(x_0,y_0,k)\in D(A)\times E\times W^{1,2}(0,T)$, then the solution $w$ of Equation (1) exists and is unique in $L^2(0,T;D(A))\cap W^{1,2}(0,T;E)$ for each $T>0$, and there is a constant $C_3$ depending on $T$ such that
$$\|w\|_{L^2(0,T;D(A))}\le C_3\bigl(1+\|x_0\|_{D(A)}+\|y_0\|_E+\|f\|_{W^{1,2}(0,T;X)}\bigr).$$
Proof. 
Let $w(\cdot)$ be the solution of Equation (1) on $[0,T_0]$, where $T_0$ is the constant in (14), and let $v(\cdot)$ be the solution of the following linear equation:
$$v''(t)=Av(t)+f(t),\quad 0<t,\qquad v(0)=x_0,\quad v'(0)=y_0.\eqno(23)$$
Then
$$(w-v)(t)=S(t)g(0,x_0)+\int_0^t S(t-s)F(s,w)\,ds-\int_0^t C(t-s)g(s,w(s))\,ds,$$
and by the estimates of the proof of Theorem 1,
$$\|v-w\|_{L^2(0,T_0;D(A))}\le C_2\|w\|_{L^2(0,T_0;D(A))}+\omega^{-1}KM(e^{\omega T_0}-1)\|x_0\|,\eqno(24)$$
where $C_2$ is the constant defined by (14). Combining (24) with Proposition 1, it holds that
$$\|w\|_{L^2(0,T_0;D(A))}\le\frac{1}{1-C_2}\|v\|_{L^2(0,T_0;D(A))}+\frac{\omega^{-1}KM(e^{\omega T_0}-1)}{1-C_2}\|x_0\|\le\frac{C_1}{1-C_2}\bigl(1+\|x_0\|_{D(A)}+\|y_0\|_E+\|f\|_{W^{1,2}(0,T_0;X)}\bigr)+\frac{\omega^{-1}KM(e^{\omega T_0}-1)}{1-C_2}\|x_0\|.\eqno(25)$$
In order to obtain the solution of Equation (1) on $[T_0,2T_0]$, we show that $w(T_0)\in D(A)$ and $w'(T_0)\in E$. Since
$$AI_1(w)=A\int_0^{T_0}S(T_0-s)\{F(s,w)+f(s)\}\,ds=C(T_0)f(0)-f(0)+\int_0^{T_0}(C(T_0-s)-I)f'(s)\,ds+\int_0^{T_0}(C(T_0-s)-I)\int_0^s k'(s-\tau)h(\tau,w(\tau))\,d\tau\,ds+\int_0^{T_0}(C(T_0-s)-I)k(0)h(s,w(s))\,ds$$
and
$$\frac{d}{dt}C(t)\int_0^t S(t-s)\{F(s,w)+f(s)\}\,ds=S(t)A\int_0^t S(t-s)\{F(s,w)+f(s)\}\,ds,$$
it holds that
$$\|I_1(w)\|_{D(A)}\le L\|k\|_{L^2(0,T_0)}\|w\|_{L^2(0,T_0;D(A))}+\bigl(\omega^{-1}K(e^{\omega T_0}-1)+1\bigr)\sqrt{T_0}\,L(Ke^{\omega T_0}+1)\|k\|_{W^{1,2}(0,T_0)}\|w\|_{L^2(0,T_0;D(A))}+\bigl(\omega^{-1}K(e^{\omega T_0}-1)+1\bigr)\sqrt{T_0}\,L(Ke^{\omega T_0}+1)|k(0)|\,\|w\|_{L^2(0,T_0;D(A))}.$$
Furthermore, we obtain
$$\|I_2(w)\|=\Bigl\|\int_0^{T_0}C(T_0-s)g(s,w(s))\,ds\Bigr\|\le\frac{K}{\sqrt{2\omega}}\sqrt{e^{2\omega T_0}-1}\Bigl[\int_0^{T_0}\|A^{-1}Ag(s,w(s))\|^2\,ds\Bigr]^{1/2}\le\frac{KML_g}{\sqrt{2\omega}}\sqrt{e^{2\omega T_0}-1}\Bigl[\int_0^{T_0}\|w\|^2\,ds\Bigr]^{1/2}\le\frac{KML_g}{\sqrt{2\omega}}\sqrt{e^{2\omega T_0}-1}\,\|w\|_{L^2(0,T_0;D(A))},$$
and
$$\|AI_2(w)\|=\Bigl\|A\int_0^{T_0}C(T_0-s)g(s,w(s))\,ds\Bigr\|\le\Bigl\|\int_0^{T_0}C(T_0-s)Ag(s,w(s))\,ds\Bigr\|\le\frac{KL_g}{\sqrt{2\omega}}\sqrt{e^{2\omega T_0}-1}\,\|w\|_{L^2(0,T_0;D(A))}.$$
Since
$$\frac{d}{dt}C(t)\int_0^{T_0}C(T_0-s)g(s,w(s))\,ds=S(t)A\int_0^{T_0}C(T_0-s)g(s,w(s))\,ds,$$
we obtain that
$$\|I_2(w)\|_{D(A)}\le\frac{KML_g}{\sqrt{2\omega}}\sqrt{e^{2\omega T_0}-1}\,\|w\|_{L^2(0,T_0;D(A))}+\bigl(\omega^{-1}K(e^{\omega T_0}-1)+1\bigr)\frac{KL_g}{\sqrt{2\omega}}\sqrt{e^{2\omega T_0}-1}\,\|w\|_{L^2(0,T_0;D(A))}.$$
Therefore, we conclude that
$$\|w(T_0)\|_{D(A)}=\bigl\|C(T_0)x_0+S(T_0)(y_0+g(0,x_0))+I_1(w)-I_2(w)\bigr\|_{D(A)}\le\bigl(\omega^{-1}K(e^{\omega T_0}-1)+1\bigr)\Bigl[\bigl\{Ke^{\omega T_0}+L_g\bigl(\omega^{-1}K(e^{\omega T_0}-1)+M\bigr)\bigr\}\|x_0\|_{D(A)}+\omega^{-1}K(e^{\omega T_0}-1)\|y_0\|_E+Ke^{\omega T_0}\|f(0)\|+\|f(0)\|+K(e^{\omega T_0}+1)\sqrt{T_0}\,\|f\|_{W^{1,2}(0,T;X)}+L\|k\|_{L^2(0,T_0)}\|w\|_{L^2(0,T_0;D(A))}+\sqrt{T_0}\,L(Ke^{\omega T_0}+1)\|k\|_{W^{1,2}(0,T_0)}\|w\|_{L^2(0,T_0;D(A))}+\sqrt{T_0}\,L(Ke^{\omega T_0}+1)|k(0)|\,\|w\|_{L^2(0,T_0;D(A))}+\frac{KML_g}{\sqrt{2\omega}}\sqrt{e^{2\omega T_0}-1}\,\|w\|_{L^2(0,T_0;D(A))}+\frac{KL_g}{\sqrt{2\omega}}\sqrt{e^{2\omega T_0}-1}\,\|w\|_{L^2(0,T_0;D(A))}\Bigr].$$
Thus, by (25), there is a positive constant $C>0$ such that
$$\|w(T_0)\|_{D(A)}\le C\bigl(1+\|x_0\|_{D(A)}+\|y_0\|_E+\|f\|_{W^{1,2}(0,T_0;X)}\bigr),$$
from which it follows immediately that $w'(T_0)\in E$. Hence, the solution can be extended to the interval $[T_0,2T_0]$ with initial data $(w(T_0),w'(T_0))\in D(A)\times E$, and an estimate similar to (25) holds there. Since the condition (14) is independent of the initial values, the solution can be extended to the interval $[0,nT_0]$ for every natural number $n$. This completes the proof. □
Example. We consider the following semilinear neutral partial differential equation in $X=L^2([0,\pi];\mathbb{R})$:
$$\frac{\partial}{\partial t}\Bigl[\frac{\partial w}{\partial t}(t,x)+g(t,w(t,x))\Bigr]=Aw(t,x)+F(t,w)+f(t),\quad 0<t,\ 0<x<\pi,\qquad w(t,0)=w(t,\pi)=0,\ t\in\mathbb{R},\qquad w(0,x)=x_0(x),\quad\frac{\partial w}{\partial t}(0,x)=y_0(x),\quad 0<x<\pi.\eqno(26)$$
It is well known that $\{e_n=\sqrt{2/\pi}\sin nx:\ n=1,2,\ldots\}$ is an orthonormal basis for $X$. Let $A:X\to X$ be defined by
$$Aw(t,x)=\frac{\partial^2}{\partial x^2}w(t,x).$$
Writing $w(t,x)\equiv w(x)$ for short, we know that $D(A)=\{w\in W^{2,2}(0,\pi):\ w(0)=w(\pi)=0\}$ and
$$Aw=-\sum_{n=1}^{\infty}n^2(w,e_n)e_n,\qquad w\in D(A),$$
and $A$ is the infinitesimal generator of a cosine family $C(t)$ $(t\in\mathbb{R})$ in $X$ represented as
$$C(t)w=\sum_{n=1}^{\infty}\cos nt\,(w,e_n)e_n,\qquad w\in X.$$
The associated sine family is given by
$$S(t)w=\sum_{n=1}^{\infty}\frac{\sin nt}{n}(w,e_n)e_n,\qquad w\in X.$$
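A truncated version of these series can be checked numerically on the Fourier sine coefficients (an added illustration, not in the original): the $n$-th mode of $C(t)w$ is $\cos(nt)c_n$ and of $S(t)w$ is $\sin(nt)c_n/n$, so the d'Alembert equation and the identity $AS(t)S(s)=\tfrac12(C(t+s)-C(t-s))$ hold mode by mode.

```python
import math

# Truncated cosine/sine families on X = L^2(0, pi), acting on the first N
# Fourier sine coefficients (c_1, ..., c_N) of w; A acts as -n^2 on mode n.
N = 50
c = [1.0 / (n * n) for n in range(1, N + 1)]  # sample coefficient vector

def C(t, v):
    return [math.cos(n * t) * vn for n, vn in zip(range(1, N + 1), v)]

def S(t, v):
    return [math.sin(n * t) / n * vn for n, vn in zip(range(1, N + 1), v)]

def close(u, v, tol=1e-10):
    return all(abs(a - b) < tol for a, b in zip(u, v))

s, t = 0.4, 1.1
# d'Alembert equation: C(s+t)w + C(s-t)w = 2 C(s) C(t) w.
lhs = [a + b for a, b in zip(C(s + t, c), C(s - t, c))]
rhs = [2.0 * x for x in C(s, C(t, c))]
assert close(lhs, rhs)
# A S(t) S(s) w = (C(t+s)w - C(t-s)w) / 2.
AStSs = [-(n * n) * vn for n, vn in zip(range(1, N + 1), S(t, S(s, c)))]
half = [(a - b) / 2.0 for a, b in zip(C(t + s, c), C(t - s, c))]
assert close(AStSs, half)
print("truncated cosine family checks passed")
```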
Since $\{e_n:n\in\mathbb{N}\}$ is an orthonormal basis of $X$, the semigroup generated by $A$ is
$$e^{At}w=\sum_{n=1}^{\infty}e^{-n^2t}(w,e_n)e_n,\qquad w\in X,\ t>0.$$
Moreover, there exists a constant $M_0$ such that $\|e^{At}\|\le M_0$.
Define $g:[0,T]\times X\to X$ by
$$g(t,w)=\sum_{n=1}^{\infty}e^{-n^2t}\Bigl(\int_0^t a_2(t-s)w(s)\,ds,\ e_n\Bigr)e_n,$$
where there exists a constant $M_1$ such that
$$|a_2(s)|\le M_1,\qquad |a_2(s)-a_2(\tau)|\le M_1(s-\tau),\qquad s,\tau\in\mathbb{R}^+.$$
Then it can be checked that Assumption (B) in Section 3 is satisfied. Indeed, for $w\in X$, we have
$$Ag(t,w)=(e^{At}-I)\int_0^t a_2(t-s)w(s)\,ds,$$
where $I$ is the identity operator from $X$ to itself. Hence, we have
$$\|Ag(t,w)\|^2\le(M_0+1)^2\Bigl\{\Bigl|\int_0^t(a_2(t-s)-a_2(t))w(s)\,ds\Bigr|^2+\Bigl|\int_0^t a_2(t)w(s)\,ds\Bigr|^2\Bigr\}\le(M_0+1)^2M_1^2\bigl(t^3/3+t\bigr)\|w\|^2.$$
It is immediately seen that Assumption (B) is satisfied. Let
$$h(t,w)(x)=h_1(t,x,w,Dw,D^2w).$$
We consider Equation (26) under the following assumptions:
Assumption (C). There is a continuous function $\gamma(t,r):\mathbb{R}^2\to\mathbb{R}^+$ such that
(h1) $h_1(t,x,0,0)=0$,
(h2) $|h_1(t,x,w,p)-h_1(t,x,w,q)|\le\gamma(t,|w|)\,|p-q|$,
(h3) $|h_1(t,x,w_1,p)-h_1(t,x,w_2,p)|\le\gamma(t,|w_1|+|w_2|)\,|w_1-w_2|$.
Then, since
$$\|h(t,w_1)-h(t,w_2)\|_{0,2}^2\le 2\int_{\Omega}|h_1(t,x,w_1,Dw_1,D^2w_1)-h_1(t,x,w_1,Dw_2,D^2w_2)|^2\,dx+2\int_{\Omega}|h_1(t,x,w_1,Dw_2,D^2w_2)-h_1(t,x,w_2,Dw_2,D^2w_2)|^2\,dx,$$
it follows from Assumption (C) that
$$\|h(t,w_1)-h(t,w_2)\|_{0,2}\le L\bigl(\|w_1\|_{D(A)},\|w_2\|_{D(A)}\bigr)\|w_1-w_2\|_{D(A)},$$
where $L(\|w_1\|_{D(A)},\|w_2\|_{D(A)})$ is a constant depending on $\|w_1\|_{D(A)}$ and $\|w_2\|_{D(A)}$. We set
$$F(t,w)=\int_0^t k(t-s)h(s,w(s))\,ds,$$
where $k$ belongs to $L^2(0,T)$.
Theorem 3.
Let Assumption (C) be satisfied for Equation (26). If $f:\mathbb{R}\to X$ is continuously differentiable and $(x_0,y_0,k)\in D(A)\times E\times W^{1,2}(0,T)$, then the solution $w$ of Equation (26) exists and is unique in $L^2(0,T;D(A))\cap W^{1,2}(0,T;E)$ for each $T>0$, and there is a constant $C_3$ depending on $T$ such that
$$\|w\|_{L^2(0,T;D(A))}\le C_3\bigl(1+\|x_0\|_{D(A)}+\|y_0\|_E+\|f\|_{W^{1,2}(0,T;X)}\bigr).$$

4. Conclusions

This paper investigated the regularity of solutions of semilinear neutral hyperbolic equations with a nonlinear convolution term. The principal operator is the infinitesimal generator of cosine and sine families. By using the properties of cosine and sine families, we obtained a variation of constants formula for solutions under general hypotheses on the nonlinear terms. The regularity results for the linear case, based on $L^2$-regularity theory, were shown to remain valid for the semilinear neutral problem; these results are also applicable to functional analysis questions arising in control problems and optimal control theory.

Author Contributions

All authors have made equal contributions. All authors read and approved the final manuscript.

Funding

This work was supported by the National Research Foundation of Korea (NRF) grant funded by the Korea government (MSIT) (2019R1F1A1048077).

Acknowledgments

The authors are thankful to the anonymous referee for useful comments and suggestions, which helped us to improve the manuscript.

Conflicts of Interest

The authors declare no conflict of interest.

References

  1. Fu, X.; Ezzinbi, K. Existence of solutions for neutral functional differential evolution equations with nonlocal conditions. Nonlinear Anal. 2003, 54, 215–227. [Google Scholar] [CrossRef]
  2. Li, Q.; Zhang, H. Existence and regularity of periodic solutions for neutral evolution equations with delays. Adv. Differ. Equ. 2019, 2019, 330. [Google Scholar] [CrossRef] [Green Version]
  3. Dos Santos, J.P.C. Existence results for a partial neutral integro-differential equation with state-dependent delay. Electron. J. Qual. Theory Differ. Equ. 2010, 29, 1–12. [Google Scholar] [CrossRef]
  4. Dos Santos, J.P.C. On state-dependent delay partial neutral integro-differential equations. Appl. Math. Comput. 2010, 216, 1637–1644. [Google Scholar] [CrossRef]
  5. Hernández, E.; Mckibben, M. On state-dependent delay partial neutral functional differential equations. Appl. Math. Comput. 2007, 186, 294–301. [Google Scholar] [CrossRef]
  6. Hernández, E.; Mckibben, M.; Henríquez, H. Existence results for partial neutral functional differential equations with state-dependent delay. Math. Comput. Model. 2009, 49, 1260–1267. [Google Scholar] [CrossRef]
  7. Maccamy, R.C. A model for one-dimensional, nonlinear viscoelasticity. Q. Appl. Math. 1977, 35, 21–33. [Google Scholar] [CrossRef] [Green Version]
  8. Maccamy, R.C. An integro-differential equation with applications in heat flow. Q. Appl. Math. 1977, 35, 1–19. [Google Scholar] [CrossRef] [Green Version]
  9. Yang, J.; Liu, A.; Liu, G. Oscillation of solutions to neutral nonlinear impulsive hyperbolic equations with several delays. Electron. J. Differ. Equ. 2013, 27, 1–10. [Google Scholar]
  10. Liu, A. Oscillation criteria of neutral type impulsive hyperbolic equations. Acta Math. Sci. Ser. B 2014, 34, 1845–1853. [Google Scholar]
  11. Jeong, J.M.; Kwun, Y.C.; Park, J.Y. Regularity for solutions of nonlinear evolution equations with nonlinear perturbations. Comput. Math. Appl. 2002, 43, 1231–1237. [Google Scholar] [CrossRef] [Green Version]
  12. Jeong, J.M.; Kwun, Y.C.; Park, J.Y. Approximate controllability for semilinear retarded functional differential equations. J. Dyn. Control Syst. 1988, 15, 329–346. [Google Scholar]
  13. Mishra, I.; Bahuguna, D. Almost automorphic mild solutions of hyperbolic evolution equations with Stepanov-like almost automorphic forcing term. Electron. J. Differ. Equ. 2012, 2012, 1–11. [Google Scholar]
  14. Liu, Y. Local well-posedness to the Cauchy problem of the 2D compressible Navier-Stokes-Smoluchowski equations with vacuum. J. Math. Anal. Appl. 2020, 489, 124154. [Google Scholar] [CrossRef]
  15. Buzano, E.; Oliaro, A. Global regularity of second order twisted differential operators. J. Differ. Equ. 2020, 268, 7364–7416. [Google Scholar] [CrossRef] [Green Version]
  16. Ospanov, K. Maximal Lp-regularity for a second-order with unbounded intermediate coefficient. Electron. J. Qual. Theory Differ. Equ. 2019, 65, 1–13. [Google Scholar] [CrossRef]
  17. Bera, M.; Prasad, M.G. Fixed points and dynamics of two-parameter family of hyperbolic cosine like functions. J. Math. Anal. Appl. 2019, 469, 1070–1079. [Google Scholar] [CrossRef]
  18. Xiao, J.; Zhu, Y. Approximation of common fixed points of asymptotically nonexpansive cosine family based on modified Ishikawa iterations. Comput. Appl. Math. 2019, 38, 151. [Google Scholar] [CrossRef]
  19. Ikawa, M. Mixed problems for hyperbolic equations of second order. J. Math. Soc. Jpn. 1968, 20, 580–608. [Google Scholar] [CrossRef]
  20. Webb, G. Continuous nonlinear perturbations of linear accretive operators in Banach spaces. J. Funct. Anal. 1972, 10, 191–203. [Google Scholar] [CrossRef] [Green Version]
  21. Heard, M.L. An abstract parabolic Volterra integro-differential equation. J. Appl. Math. 1981, 17, 175–202. [Google Scholar]
  22. Barbu, V. Analysis and Control of Nonlinear Infinite Dimensional Systems; Academic Press Limited: Cambridge, MA, USA, 1993. [Google Scholar]
  23. Travis, C.C.; Webb, G.F. Cosine families and abstract nonlinear second order differential equations. Acta Math. Hung. 1978, 32, 75–96. [Google Scholar] [CrossRef]
  24. Travis, C.C.; Webb, G.F. An abstract second order semilinear Volterra integrodifferential equation. SIAM J. Math. Anal. 1979, 10, 412–423. [Google Scholar] [CrossRef]
  25. Park, J.Y.; Han, H.K.; Kwun, Y.C. Approximate controllability of second order integrodifferntial systems. Indian J. Pure Appl. Math. 1998, 29, 941–950. [Google Scholar]

Cho, S.-H.; Jeong, J.-M. Regularity for Semilinear Neutral Hyperbolic Equations with Cosine Families. Mathematics 2020, 8, 1157. https://doi.org/10.3390/math8071157
