Article

Existence Results for Nonlinear Impulsive System with Causal Operators

College of Mathematics and Information Sciences, Hebei University, Baoding 071002, China
* Author to whom correspondence should be addressed.
These authors contributed equally to this work.
Mathematics 2024, 12(17), 2755; https://doi.org/10.3390/math12172755
Submission received: 4 August 2024 / Revised: 2 September 2024 / Accepted: 4 September 2024 / Published: 5 September 2024

Abstract

In this paper, we establish sufficient conditions for some existence results for nonlinear impulsive differential equations involving causal operators. Our method is based on the monotone iterative technique, a new differential inequality, and the Schauder fixed point theorem. Moreover, we consider three impulsive differential equations as applications to verify our theoretical results.

1. Introduction

Impulsive systems have proved to be effective tools for describing evolutionary processes that undergo instantaneous changes of state at certain moments. Pioneering studies on impulsive differential equations and their dynamics are presented in [1], which establishes a fundamental theoretical framework for impulsive systems, and a comprehensive analysis of their system properties is provided in the monograph [2]. Beyond the theory [3,4,5,6,7,8,9], impulsive systems are widely used in biological systems, control systems, ecological systems, and neural network systems [10,11,12]. As a result of these successful applications in various fields, impulsive differential equations have attracted considerable attention.
The theory of differential equations with causal operators is undergoing an important development because it provides a richer framework than the corresponding theory of ordinary differential equations. A causal operator is a non-anticipative operator; the notion was adopted from the engineering literature, and basic results are collected in the monograph [13]. Recently, various types of causal differential equations have been studied widely, such as ordinary differential equations [14,15], functional differential equations [16], differential equations in Banach spaces [17], difference equations [18], and integro-differential equations [19]. In addition, Jabeen et al. [20] investigated impulsive differential equations with causal operators and obtained the existence of an optimal solution for the associated control problem. Inspired by the above results, the aim of this paper is to study impulsive differential equations enriched by causal operators and subject to nonlinear periodic boundary conditions. Therefore, we discuss the following impulsive differential equation involving a causal operator with nonlinear periodic boundary conditions:
$$\begin{cases} x'(t) = (Rx)(t), & t \neq t_k,\ t \in J,\\ \Delta x(t_k) = I_k(x(t_k)), & k = 1,2,\ldots,m,\\ h(x(0), x(T)) = 0, \end{cases}\tag{1}$$
where $J = [0,T]$ $(T>0)$, $E$ is a real separable Banach space of continuous functions from $[0,T]$ to the set of real numbers $\mathbb{R}$, $R \in C(E,E)$ is a causal operator, $h \in C(\mathbb{R}\times\mathbb{R},\mathbb{R})$, $0 = t_0 < t_1 < t_2 < \cdots < t_m < t_{m+1} = T$, $I_k \in C(\mathbb{R},\mathbb{R})$, and $\Delta x(t_k) = x(t_k^+) - x(t_k^-)$, where $x(t_k^+)$ and $x(t_k^-)$ denote the right and left limits of $x(t)$ at $t = t_k$ $(k = 1,2,\ldots,m)$.
One point of interest of our study lies in the fact that the periodic boundary conditions are nonlinear and encompass the usual linear boundary conditions (such as initial, periodic, and anti-periodic conditions) as well as more general ones, such as $e^{x(0)} - x(T) = 0$ and $y(0) - \int_0^T y(t)\,dt = C$ (where $C$ is a constant). In addition, note that the impulsive differential equation (1) reduces to an ordinary differential equation when $I_k \equiv 0$, which has been studied in [14], to an initial value problem when the boundary condition is $x(0) = \xi$, which has been studied in [20], and to other differential equations, such as the case $(Rx)(t) = f(t, x(t), (Tx)(t), (Sx)(t))$, which has been studied in [21]. Thus, our work covers more types of differential equations.
In this paper, we extend the notion of causal operators to the nonlinear periodic boundary value problems for impulsive differential equations. The rest of this paper is organized as follows. Section 2 establishes a new differential inequality. Section 3 presents the existence of extremal solutions, following the definition of upper and lower solutions. Section 4 gives the existence of extremal quasi-solutions after the definition of coupled lower and upper solutions. In Section 5, a weakly coupled extremal quasi-solution is studied, which is dependent on the introduction of weakly coupled lower and upper solutions. In addition, examples are added in each section to verify the theoretical results.

2. Preliminaries

Let $C(\mathbb{R},\mathbb{R})$ denote the set of real-valued continuous functions and $J_0 = J \setminus \{t_1, t_2, \ldots, t_m\}$. Let us introduce the spaces
$$PC(J,\mathbb{R}) = \big\{x: J \to \mathbb{R} :\ x(t) \text{ is continuous everywhere except for some } t_k, \text{ at which } x(t_k^-) \text{ and } x(t_k^+) \text{ exist and } x(t_k^-) = x(t_k),\ k = 1,2,\ldots,m\big\},$$
$$PC^1(J,\mathbb{R}) = \big\{x \in PC(J,\mathbb{R}) :\ x \text{ is continuously differentiable for } t \in J_0, \text{ and } x'(0^+),\ x'(T^-),\ x'(t_k^+),\ x'(t_k^-) \text{ exist},\ k = 1,2,\ldots,m\big\}.$$
Clearly, $PC(J,\mathbb{R})$ and $PC^1(J,\mathbb{R})$ are Banach spaces with the respective norms
$$\|x\|_{PC(J,\mathbb{R})} = \sup_{t\in J}|x(t)|, \qquad \|x\|_{PC^1(J,\mathbb{R})} = \|x\|_{PC(J,\mathbb{R})} + \|x'\|_{PC(J,\mathbb{R})}.$$
Let $\Omega = PC^1(J,\mathbb{R})$. A function $x \in \Omega$ is called a solution of (1) if it satisfies (1).
Definition 1.
An operator $R \in C(E,E)$ is said to be a causal operator if, for every pair $\zeta, \xi \in E$ and every $t \in [0,T]$, the equality $\zeta(s) = \xi(s)$ for $0 \le s \le t$ implies $(R\zeta)(s) = (R\xi)(s)$ for $0 \le s \le t$.
Lemma 1.
Suppose that $p \in \Omega$ satisfies
$$\begin{cases} p'(t) \le -M(t)p(t) - (Lp)(t), & t \neq t_k,\ t \in J,\\ \Delta p(t_k) \le -L_k p(t_k), & k = 1,2,\ldots,m,\\ p(0) \le \lambda p(T), \end{cases}\tag{2}$$
where $M \in C(J,[0,+\infty))$, $0 \le L_k < 1$, $k = 1,2,\ldots,m$, and $L \in C(E,E)$ is a positive linear operator (i.e., $Lz \ge 0$ whenever $z \ge 0$) satisfying
$$\int_0^T \big[M(t) + (L1)(t)\big]\,dt + \sum_{k=1}^m L_k \le \lambda, \quad \text{where } 1(t) \equiv 1,\ 0 < \lambda \le 1.\tag{3}$$
Then $p(t) \le 0$ for $t \in J$.
Proof. 
Suppose, to the contrary, that $p(t) \le 0$, $t \in J$, does not hold. Two cases arise:
Case 1: There exists $\bar{t} \in J$ such that $p(\bar{t}) > 0$ and $p(t) \ge 0$ for all $t \in J$.
Then, from (2), we have $p'(t) \le 0$ for $t \neq t_k$ and $\Delta p(t_k) \le 0$ $(k = 1,2,\ldots,m)$, hence $p(t)$ is nonincreasing on $J$. If $\lambda = 1$, then $p(0) \le p(T)$ shows that $p(t) \equiv c$ ($c$ a constant), so $p'(t) = 0$. Noticing that $p(\bar{t}) > 0$, we have $0 = p'(\bar{t}) \le -M(\bar{t})p(\bar{t}) - (Lp)(\bar{t}) < 0$, which is a contradiction. For $0 < \lambda < 1$, it follows that $p(T) \le p(0) \le \lambda p(T)$, presenting another contradiction.
Case 2: There exist $t_*$ and $t^*$ such that $p(t_*) < 0$ and $p(t^*) > 0$.
Let $\inf_{t \in J} p(t) = -r$, $r > 0$; then, for some $i \in \{1,2,\ldots,m\}$, there is a $t_* \in (t_i, t_{i+1}]$ such that $p(t_*) = -r$ or $p(t_*^+) = -r$. We only focus on the case where $p(t_*) = -r$, as the proof for the case $p(t_*^+) = -r$ is analogous.
Subcase (1): If $t_* < t^*$, from (2), we have
$$p(t^*) - p(t_*) = \int_{t_*}^{t^*} p'(t)\,dt + \sum_{t_* < t_k < t^*} \Delta p(t_k) \le -\int_{t_*}^{t^*}\big[M(t)p(t) + (Lp)(t)\big]\,dt - \sum_{t_* < t_k < t^*} L_k p(t_k) \le r\Big(\int_0^T \big[M(t) + (L1)(t)\big]\,dt + \sum_{k=1}^m L_k\Big),$$
and hence
$$0 < p(t^*) \le -r + r\Big(\int_0^T \big[M(t) + (L1)(t)\big]\,dt + \sum_{k=1}^m L_k\Big) = r\Big(\int_0^T \big[M(t) + (L1)(t)\big]\,dt + \sum_{k=1}^m L_k - 1\Big),$$
which yields
$$\int_0^T \big[M(t) + (L1)(t)\big]\,dt + \sum_{k=1}^m L_k > 1,$$
which contradicts (3).
Subcase (2): If $t_* > t^*$, from (2), we get
$$p(T) = p(t_*) + \int_{t_*}^{T} p'(t)\,dt + \sum_{t_* < t_k < T} \Delta p(t_k) \le p(t_*) - \int_{t_*}^{T}\big[M(t)p(t) + (Lp)(t)\big]\,dt - \sum_{t_* < t_k < T} L_k p(t_k) \le p(t_*) + r\Big(\int_{t_*}^{T}\big[M(t) + (L1)(t)\big]\,dt + \sum_{t_* < t_k < T} L_k\Big),$$
and
$$p(t^*) = p(0) + \int_0^{t^*} p'(t)\,dt + \sum_{0 < t_k < t^*} \Delta p(t_k) \le p(0) - \int_0^{t^*}\big[M(t)p(t) + (Lp)(t)\big]\,dt - \sum_{0 < t_k < t^*} L_k p(t_k) \le p(0) + r\Big(\int_0^{t^*}\big[M(t) + (L1)(t)\big]\,dt + \sum_{0 < t_k < t^*} L_k\Big).$$
Using $p(0) \le \lambda p(T)$, $0 < \lambda \le 1$, we get
$$0 < p(t^*) \le \lambda p(t_*) + r\Big(\int_{t_*}^{T}\big[M(t) + (L1)(t)\big]\,dt + \sum_{t_* < t_k < T} L_k\Big) + r\Big(\int_0^{t^*}\big[M(t) + (L1)(t)\big]\,dt + \sum_{0 < t_k < t^*} L_k\Big) \le -r\lambda + r\Big(\int_0^T \big[M(t) + (L1)(t)\big]\,dt + \sum_{k=1}^m L_k\Big) = r\Big(\int_0^T \big[M(t) + (L1)(t)\big]\,dt + \sum_{k=1}^m L_k - \lambda\Big) \le 0,$$
which is a contradiction. Therefore $p(t) \le 0$, $t \in J$. The proof is complete. □
Next, we give the following linear problems and lemmas, which help to validate our main results.
$$\begin{cases} x'(t) = -M(t)x(t) - (Lx)(t) + \sigma_\eta(t), & t \neq t_k,\ t \in J,\\ \Delta x(t_k) = -L_k x(t_k) + I_k(\eta(t_k)) + L_k \eta(t_k), & k = 1,2,\ldots,m,\\ h(\eta(0), \eta(T)) + M_1\big(x(0) - \eta(0)\big) - M_2\big(x(T) - \eta(T)\big) = 0, \end{cases}\tag{4}$$
where $M \in C(J,[0,+\infty))$, $0 \le L_k < 1$, $k = 1,2,\ldots,m$, $\eta \in \Omega$, and $\sigma_\eta(t) = (R\eta)(t) + M(t)\eta(t) + (L\eta)(t)$.
Lemma 2.
A function $x \in \Omega$ is a solution of (4) if and only if $x \in E$ satisfies the following integral equation:
$$x(t) = \frac{A_\eta\, e^{-\int_0^t M(r)\,dr}}{M_1 - M_2 e^{-\int_0^T M(r)\,dr}} + \int_0^T G(t,s)\big[\sigma_\eta(s) - (Lx)(s)\big]\,ds + \sum_{0 < t_k < T} G(t,t_k)\big[-L_k x(t_k) + I_k(\eta(t_k)) + L_k \eta(t_k)\big],\tag{5}$$
where $A_\eta = -h(\eta(0),\eta(T)) + M_1\eta(0) - M_2\eta(T)$, $M \in C(J,[0,+\infty))$, $M_1, M_2$ are constants with $M_1 \neq M_2 e^{-\int_0^T M(r)\,dr}$, and
$$G(t,s) = \begin{cases} \dfrac{M_2}{M_1 - M_2 e^{-\int_0^T M(r)\,dr}}\, e^{-\int_0^t M(r)\,dr}\, e^{-\int_s^T M(r)\,dr} + e^{-\int_s^t M(r)\,dr}, & 0 \le s < t \le T,\\[2mm] \dfrac{M_2}{M_1 - M_2 e^{-\int_0^T M(r)\,dr}}\, e^{-\int_0^t M(r)\,dr}\, e^{-\int_s^T M(r)\,dr}, & 0 \le t \le s \le T. \end{cases}$$
Proof. 
Assume $x \in E$ is a solution of (4); then
$$x'(t) + M(t)x(t) = -(Lx)(t) + \sigma_\eta(t), \quad t \neq t_k,\ t \in J,$$
$$\Delta x(t_k) = -L_k x(t_k) + I_k(\eta(t_k)) + L_k \eta(t_k), \quad k = 1,2,\ldots,m.$$
It is easy to obtain the following formula:
$$x(t) = x(0)e^{-\int_0^t M(r)\,dr} + \int_0^t e^{-\int_s^t M(r)\,dr}\big[\sigma_\eta(s) - (Lx)(s)\big]\,ds + \sum_{0 < t_k < t} e^{-\int_{t_k}^t M(r)\,dr}\big[-L_k x(t_k) + I_k(\eta(t_k)) + L_k \eta(t_k)\big].$$
Letting $t = T$, we get
$$x(T) = x(0)e^{-\int_0^T M(r)\,dr} + \int_0^T e^{-\int_s^T M(r)\,dr}\big[\sigma_\eta(s) - (Lx)(s)\big]\,ds + \sum_{0 < t_k < T} e^{-\int_{t_k}^T M(r)\,dr}\big[-L_k x(t_k) + I_k(\eta(t_k)) + L_k \eta(t_k)\big].$$
Since $M_1 x(0) - M_2 x(T) = A_\eta$, we obtain
$$x(0) = \frac{A_\eta}{M_1 - M_2 e^{-\int_0^T M(r)\,dr}} + \frac{M_2}{M_1 - M_2 e^{-\int_0^T M(r)\,dr}}\int_0^T e^{-\int_s^T M(r)\,dr}\big[\sigma_\eta(s) - (Lx)(s)\big]\,ds + \frac{M_2}{M_1 - M_2 e^{-\int_0^T M(r)\,dr}}\sum_{0 < t_k < T} e^{-\int_{t_k}^T M(r)\,dr}\big[-L_k x(t_k) + I_k(\eta(t_k)) + L_k \eta(t_k)\big].$$
Then
$$x(t) = \frac{A_\eta\, e^{-\int_0^t M(r)\,dr}}{M_1 - M_2 e^{-\int_0^T M(r)\,dr}} + \int_0^T G(t,s)\big[\sigma_\eta(s) - (Lx)(s)\big]\,ds + \sum_{0 < t_k < T} G(t,t_k)\big[-L_k x(t_k) + I_k(\eta(t_k)) + L_k \eta(t_k)\big],$$
where $G(t,s)$ is as defined in the statement of the lemma. We see that if $x(t)$ is a solution of (4), then $x(t)$ also satisfies (5); the converse follows by direct verification. The proof is complete. □
Apparently, $|G(t,s)| \le \max\Big\{\Big|\dfrac{M_1}{M_1 - M_2 e^{-\int_0^T M(r)\,dr}}\Big|, \Big|\dfrac{M_2}{M_1 - M_2 e^{-\int_0^T M(r)\,dr}}\Big|\Big\}$. In the remainder of the paper, we denote this bound by $\tau$.
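To make this bound concrete, the following minimal Python sketch evaluates $G(t,s)$ on a grid and compares the largest value found with $\tau$. The data used ($M(t) = t/3$, $M_1 = 2$, $M_2 = 1$, $T = 1$) are illustrative assumptions chosen only for the demonstration; they are not taken from the results above.

```python
import math

# Minimal sketch: evaluate the Green's function G(t, s) of Lemma 2 on a grid and
# compare with the bound tau. The data below (M, M1, M2, T) are illustrative
# assumptions chosen for this demonstration only.
M1, M2, T = 2.0, 1.0, 1.0
int_M = lambda a, b: (b * b - a * a) / 6.0      # closed form of int_a^b (r/3) dr

denom = M1 - M2 * math.exp(-int_M(0.0, T))      # M1 - M2 * exp(-int_0^T M)

def G(t, s):
    g = (M2 / denom) * math.exp(-int_M(0.0, t)) * math.exp(-int_M(s, T))
    if s < t:
        g += math.exp(-int_M(s, t))             # extra term on 0 <= s < t <= T
    return g

tau = max(abs(M1 / denom), abs(M2 / denom))
grid = [i * T / 200 for i in range(201)]
max_G = max(abs(G(t, s)) for t in grid for s in grid)
print(f"tau = {tau:.4f}, max |G(t,s)| on the grid = {max_G:.4f}")  # expect max_G <= tau
```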
Lemma 3.
Let $L \in C(E,E)$ be a positive linear operator, $M \in C(J,[0,+\infty))$, $M_1 \neq M_2 e^{-\int_0^T M(r)\,dr}$, and
$$\tau \|L\| T + \sum_{k=1}^m |L_k| < 1.\tag{6}$$
Then, problem (4) has a unique solution.
Proof. 
For any $x \in E_0$, let us define
$$(Fx)(t) = \frac{A_\eta\, e^{-\int_0^t M(r)\,dr}}{M_1 - M_2 e^{-\int_0^T M(r)\,dr}} + \int_0^T G(t,s)\big[\sigma_\eta(s) - (Lx)(s)\big]\,ds + \sum_{0 < t_k < T} G(t,t_k)\big[-L_k x(t_k) + I_k(\eta(t_k)) + L_k \eta(t_k)\big],$$
with $G(t,s)$ as defined in Lemma 2. For any $x_1, x_2 \in E_0$, we have
$$\|Fx_1 - Fx_2\|_{E_0} \le \Big(\tau \|L\| T + \sum_{k=1}^m |L_k|\Big)\|x_1 - x_2\|.$$
Hence, by the Banach contraction principle, there exists a function $x \in E_0$ such that $x = Fx$. Apparently, $x$ is also the unique solution of (4), which concludes the proof. □
Remark 1.
If $M \in C(J,[0,+\infty))$, $M_1 \ge M_2 > 0$, $L \in C(E,E)$ is a positive linear operator, and
$$\frac{M_1}{M_1 - M_2 e^{-\int_0^T M(r)\,dr}}\,\|L\| T + \sum_{k=1}^m |L_k| < 1,$$
then problem (4) has a unique solution.
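Readers who wish to experiment numerically may find the following Python sketch useful: it realizes the fixed-point iteration $x \mapsto Fx$ behind Lemma 3 for one concrete instance of the linear problem (4), written in the integral form (5) and discretized on a uniform grid by the trapezoidal rule. Every concrete choice below ($M$, the operator $L$, $\sigma_\eta$, the impulse data, $A_\eta$, $M_1$, $M_2$) is an illustrative assumption, not data appearing in the paper.

```python
import numpy as np

# Sketch of the fixed-point iteration x -> Fx from the proof of Lemma 3, applied to
# the integral form (5) of the linear problem (4). All concrete data below
# (M, L, sigma, impulse points/coefficients, A_eta, M1, M2) are illustrative
# assumptions; the trapezoidal rule on a uniform grid is only a rough discretization.

T, N = 1.0, 400
t = np.linspace(0.0, T, N + 1)
dt = t[1] - t[0]

M = lambda s: s / 3.0                       # M(t) >= 0
M1, M2 = 2.0, 1.0                           # boundary constants with M1 >= M2 > 0
tk, Lk = [1.0 / 3.0], [1.0 / 6.0]           # impulse points and coefficients
ck = [0.05]                                 # stands in for I_k(eta(t_k)) + L_k*eta(t_k)
A_eta = 0.1                                 # stands in for A_eta
sigma = np.cos(t)                           # stands in for sigma_eta(t)

# cumulative integral of M, so exp(-int_a^b M) = EM[a_idx] / EM[b_idx]
intM = np.concatenate(([0.0], np.cumsum(0.5 * (M(t[1:]) + M(t[:-1])) * dt)))
EM = np.exp(intM)                           # EM[i] = exp(int_0^{t_i} M)
denom = M1 - M2 / EM[-1]                    # M1 - M2 * exp(-int_0^T M)

# Green's function of Lemma 2 evaluated on the grid
Gmat = np.empty((N + 1, N + 1))
for i in range(N + 1):
    for j in range(N + 1):
        g = (M2 / denom) * (1.0 / EM[i]) * (EM[j] / EM[-1])
        if t[j] < t[i]:
            g += EM[j] / EM[i]              # extra term exp(-int_s^t M) when s < t
        Gmat[i, j] = g

def Lop(x):
    """A concrete positive linear operator: (Lx)(t) = (1/10) * average of x on [0, t]."""
    out = np.empty_like(x)
    csum = np.concatenate(([0.0], np.cumsum(0.5 * (x[1:] + x[:-1]) * dt)))
    out[0] = 0.1 * x[0]
    out[1:] = 0.1 * csum[1:] / t[1:]
    return out

w = np.full(N + 1, dt); w[0] = w[-1] = dt / 2.0          # trapezoid weights
kidx = [int(round(s / dt)) for s in tk]                  # nearest grid indices of t_k

def F(x):
    """One application of the operator F defined in the proof of Lemma 3."""
    new = A_eta / (denom * EM) + Gmat @ (w * (sigma - Lop(x)))
    for k, j in enumerate(kidx):
        new = new + Gmat[:, j] * (-Lk[k] * x[j] + ck[k])
    return new

x = np.zeros(N + 1)
for n in range(1, 200):
    x_new = F(x)
    if np.max(np.abs(x_new - x)) < 1e-12:
        break
    x = x_new
print(f"converged after {n} iterations, x(0) = {x_new[0]:.6f}, x(T) = {x_new[-1]:.6f}")
```

Under these data the contraction constant of Lemma 3 is roughly 0.46, so the iteration converges quickly; changing the data so that condition (6) fails may destroy this behaviour.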

3. Extremal Solutions of Problem (1)

A function $m \in \Omega$ is called a lower solution of (1) if
$$m'(t) \le (Rm)(t),\ t \neq t_k, \qquad \Delta m(t_k) \le I_k(m(t_k)), \qquad h(m(0), m(T)) \le 0,$$
where $t \in J$, $k = 1,2,\ldots,m$.
A function n Ω is defined as an upper solution of problem (1) if the above inequalities are reversed.
Theorem 1.
Suppose that (3) and (6) hold and $R \in C(E,E)$. We also assume that
($H_1$) $m, n \in \Omega$ are lower and upper solutions of problem (1), respectively, and $m(t) \le n(t)$ on $J$;
($H_2$) there exist $M \in C(J,[0,+\infty))$ and $0 \le L_k < 1$, $k = 1,2,\ldots,m$, and $L \in C(E,E)$ is a positive linear operator such that
$$(Rx_1)(t) - (Rx_2)(t) \le -M(t)\big(x_1(t) - x_2(t)\big) - \big(L(x_1 - x_2)\big)(t)$$
for $m(t) \le x_1(t) \le x_2(t) \le n(t)$, $t \in J$, $t \neq t_k$;
($H_3$) the functions $I_k \in C(\mathbb{R},\mathbb{R})$ satisfy
$$I_k(x_1(t_k)) - I_k(x_2(t_k)) \le -L_k\big(x_1(t_k) - x_2(t_k)\big)$$
for $m(t_k) \le x_1(t_k) \le x_2(t_k) \le n(t_k)$, $k = 1,2,\ldots,m$;
($H_4$) the function $h \in C(\mathbb{R}\times\mathbb{R},\mathbb{R})$ satisfies
$$h(\bar{x}, \bar{z}) - h(x, z) \le M_1(\bar{x} - x) - M_2(\bar{z} - z)$$
whenever $m(0) \le x \le \bar{x} \le n(0)$, $m(T) \le z \le \bar{z} \le n(T)$, where $M_1 \ge M_2 > 0$ and $\lambda = M_2/M_1$.
Then problem (1) has minimal and maximal solutions within the interval $[m,n] = \{x \in \Omega : m(t) \le x(t) \le n(t),\ t \in J\}$.
Proof. 
For $\eta \in [m,n]$, let $\sigma_\eta(t) = (R\eta)(t) + M(t)\eta(t) + (L\eta)(t)$. By Lemma 3, problem (4) has exactly one solution $x \in \Omega$. Defining an operator $F$ by $F\eta = x$, we show that $F$ possesses the following properties:
(a) $m \le Fm$, $Fn \le n$.
Set $p(t) = m(t) - m_1(t)$, where $m_1 = Fm$. Employing ($H_1$), we have
$$p'(t) = m'(t) - m_1'(t) \le (Rm)(t) - \big[(Rm)(t) + M(t)m(t) + (Lm)(t)\big] + M(t)m_1(t) + (Lm_1)(t) = -M(t)p(t) - (Lp)(t), \quad t \neq t_k,\ t \in J,$$
$$\Delta p(t_k) = \Delta m(t_k) - \Delta m_1(t_k) \le I_k(m(t_k)) + L_k m_1(t_k) - I_k(m(t_k)) - L_k m(t_k) = -L_k p(t_k), \quad k = 1,2,\ldots,m,$$
and
$$p(0) = m(0) - m_1(0) = m(0) - \Big[-\frac{1}{M_1}h(m(0), m(T)) + \frac{M_2}{M_1}\big(m_1(T) - m(T)\big) + m(0)\Big] = \frac{1}{M_1}h(m(0), m(T)) + \frac{M_2}{M_1}\big(m(T) - m_1(T)\big) \le \frac{M_2}{M_1}p(T).$$
From Lemma 1 and $M_1 \ge M_2 > 0$, we get $p \le 0$, so $m \le m_1$. Similarly, we have $Fn \le n$.
(b) The operator $F$ is monotone nondecreasing on $[m,n]$.
Let $\eta_1, \eta_2 \in [m,n]$ with $\eta_1 \le \eta_2$. Set $u_1 = F\eta_1$, $u_2 = F\eta_2$, and $p(t) = u_1(t) - u_2(t)$. Applying $\eta_1 \le \eta_2$, ($H_2$), ($H_3$), and ($H_4$), we have
$$p'(t) = u_1'(t) - u_2'(t) = -M(t)p(t) - (Lp)(t) + (R\eta_1)(t) - (R\eta_2)(t) + M(t)\big(\eta_1(t) - \eta_2(t)\big) + \big(L(\eta_1 - \eta_2)\big)(t) \le -M(t)p(t) - (Lp)(t), \quad t \neq t_k,\ t \in J,$$
$$\Delta p(t_k) = \Delta u_1(t_k) - \Delta u_2(t_k) = -L_k p(t_k) + I_k(\eta_1(t_k)) - I_k(\eta_2(t_k)) + L_k\big(\eta_1(t_k) - \eta_2(t_k)\big) \le -L_k p(t_k), \quad k = 1,2,\ldots,m,$$
and
$$p(0) = u_1(0) - u_2(0) = \Big[-\frac{1}{M_1}h(\eta_1(0), \eta_1(T)) + \frac{M_2}{M_1}\big(u_1(T) - \eta_1(T)\big) + \eta_1(0)\Big] - \Big[-\frac{1}{M_1}h(\eta_2(0), \eta_2(T)) + \frac{M_2}{M_1}\big(u_2(T) - \eta_2(T)\big) + \eta_2(0)\Big] \le \frac{M_2}{M_1}p(T).$$
From Lemma 1, $p \le 0$, which implies $F\eta_1 \le F\eta_2$.
Now we define the sequences $\{m_n(t)\}$, $\{n_n(t)\}$ with $m_0 = m$, $n_0 = n$ and $m_{n+1} = Fm_n$, $n_{n+1} = Fn_n$. By (a) and (b), we obtain
$$m_0 \le m_1 \le m_2 \le \cdots \le m_n \le \cdots \le n_n \le \cdots \le n_2 \le n_1 \le n_0.$$
The sequence $\{m_n\}$ is nondecreasing and bounded, so it converges to some $\varrho$ uniformly on $J$; similarly, $\lim_{n\to\infty} n_n(t) = \varsigma(t)$ uniformly on $J$. Passing to the limit as $n \to \infty$, we find that $\varrho(t)$ and $\varsigma(t)$ are solutions of (1).
To show that $\varrho$ and $\varsigma$ are extremal solutions of (1), consider an arbitrary solution $x$ of (1) with $x \in [m,n]$. If $m_n \le x$, then $m_{n+1} = Fm_n \le Fx = x$ by the monotonicity of $F$, and by induction one obtains $m_n \le x$ for every $n \in \mathbb{N}$. Thus, passing to the limit, $\varrho \le x$.
The same arguments show that $x \le \varsigma$. □
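At the level of pseudocode, the construction above reduces to iterating the solution operator of the linear problem (4) from a lower/upper pair. The short Python template below records this scheme; the callable F, the grid representation of functions, and the stopping rule are assumptions made for illustration and are not part of the proof.

```python
from typing import Callable, List, Tuple

# Schematic monotone iteration from the proof of Theorem 1: m_{n+1} = F(m_n),
# n_{n+1} = F(n_n). Here F stands for an (abstract) solver of the linear problem (4)
# mapping eta to its unique solution, supplied by the caller; functions are
# represented as lists of grid values. This is an illustrative template only.

Vec = List[float]

def monotone_iteration(F: Callable[[Vec], Vec], lower: Vec, upper: Vec,
                       tol: float = 1e-10, max_iter: int = 200) -> Tuple[Vec, Vec]:
    m, n = lower, upper
    for _ in range(max_iter):
        m_new, n_new = F(m), F(n)
        # Properties (a) and (b) of F guarantee m <= m_new <= n_new <= n pointwise,
        # so both sequences converge monotonically to the extremal solutions.
        if max(abs(a - b) for a, b in zip(m, m_new)) < tol and \
           max(abs(a - b) for a, b in zip(n, n_new)) < tol:
            return m_new, n_new
        m, n = m_new, n_new
    return m, n
```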
Example 1.
Consider the following problem:
$$\begin{cases} x'(t) = -\dfrac{t}{3}x(t) - \dfrac{1}{10t}\displaystyle\int_0^t x(s)\,ds, & t \in J = [0,1],\ t \neq \tfrac{1}{3},\\[1mm] \Delta x(t_k) = -\dfrac{1}{15}x^2\big(\tfrac{1}{3}\big), &\\ x(0) - x(1) + \dfrac{1}{6}x^2(0) = 0. & \end{cases}$$
Set $m = 0$ and $n = 10$. Evidently, $m$ and $n$ are lower and upper solutions with $m \le n$. Taking $(Rx)(t) = -\frac{t}{3}x(t) - \frac{1}{10t}\int_0^t x(s)\,ds$ and $h(x,y) = x - y + \frac{1}{6}x^2$, assumptions (3), (6), ($H_1$), ($H_2$), ($H_3$), and ($H_4$) hold with $M(t) = \frac{t}{3}$, $L_k = \frac{1}{6}$, $(L1)(t) = \frac{1}{10}$, $M_1 = 2$, $M_2 = 1$, $\lambda = \frac{1}{2}$, $T = 1$. By Theorem 1, the above problem has extremal solutions within the interval $[m,n]$.
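Under this reading of the data ($M(t) = t/3$, $(L1)(t) = 1/10$, an operator bound $\|L\| \le 1/10$, $L_1 = 1/6$, $M_1 = 2$, $M_2 = 1$, $\lambda = 1/2$, $T = 1$), the two quantitative hypotheses (3) and (6) can be checked in a few lines of Python. The operator-norm bound is our own estimate, and the snippet is only a sanity check of this reconstruction of the example's data:

```python
import math

# Numerical sanity check of conditions (3) and (6) for the data of Example 1,
# under the reading M(t) = t/3, (L1)(t) = 1/10, ||L|| <= 1/10, L_1 = 1/6,
# M1 = 2, M2 = 1, lambda = 1/2, T = 1 (our reconstruction of the example's data).
T, lam = 1.0, 0.5
M1, M2 = 2.0, 1.0
Lk = [1.0 / 6.0]
int_M = 1.0 / 6.0        # int_0^1 (t/3) dt
int_L1 = 1.0 / 10.0      # int_0^1 (L1)(t) dt with (L1)(t) = 1/10
norm_L = 1.0 / 10.0      # bound on the operator norm of L

cond3 = int_M + int_L1 + sum(Lk)                         # condition (3): <= lambda
tau = max(M1, M2) / (M1 - M2 * math.exp(-int_M))         # bound on |G(t,s)|
cond6 = tau * norm_L * T + sum(abs(l) for l in Lk)       # condition (6): < 1
print(f"(3): {cond3:.4f} <= {lam}?  {cond3 <= lam}")
print(f"(6): {cond6:.4f} < 1?     {cond6 < 1}")
```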

4. Extremal Quasi-Solutions of Problem (1)

This section shows that the monotone iterative technique remains applicable in the coupled setting and uses it to establish the existence of extremal quasi-solutions.
Definition 2.
Functions $x, z \in \Omega$ are called coupled lower and upper solutions of (1) if
$$x'(t) \le (Rx)(t),\ t \neq t_k,\ t \in J, \qquad \Delta x(t_k) \le I_k(x(t_k)),\ k = 1,2,\ldots,m, \qquad h(x(0), z(T)) \le 0,$$
and
$$z'(t) \ge (Rz)(t),\ t \neq t_k,\ t \in J, \qquad \Delta z(t_k) \ge I_k(z(t_k)),\ k = 1,2,\ldots,m, \qquad h(z(0), x(T)) \ge 0.$$
Definition 3.
Functions $U, V \in \Omega$ are called quasi-solutions of (1) if
$$U'(t) = (RU)(t),\ t \neq t_k,\ t \in J, \qquad \Delta U(t_k) = I_k(U(t_k)),\ k = 1,2,\ldots,m, \qquad h(U(0), V(T)) = 0,$$
and
$$V'(t) = (RV)(t),\ t \neq t_k,\ t \in J, \qquad \Delta V(t_k) = I_k(V(t_k)),\ k = 1,2,\ldots,m, \qquad h(V(0), U(T)) = 0.$$
Definition 4.
A pair of quasi-solutions $(\varrho, \varsigma)$ of problem (1) is said to be the minimal and maximal quasi-solution of (1) if, for every pair of quasi-solutions $(U, V)$ of (1), $\varrho \le U$ and $V \le \varsigma$ on $J$. If both the minimal and maximal quasi-solutions exist, they are referred to as the extremal quasi-solutions of problem (1).
Theorem 2.
Assume that ($H_2$), ($H_3$), (3), and (6) hold. Moreover,
($A_1$) $R \in C(E,E)$ is a causal operator, and $x, z \in \Omega$ are coupled lower and upper solutions of (1) with $x(t) \le z(t)$, $t \in J$;
($A_2$) there exist $M_1, M_2$ with $M_1 \ge M_2 > 0$ such that $h \in C(\mathbb{R}\times\mathbb{R},\mathbb{R})$ is nonincreasing in the first variable and satisfies
$$h(x, \bar{z}) - h(x, z) \ge M_2(\bar{z} - z), \quad \text{if } x(T) \le z \le \bar{z} \le z(T).$$
Then, problem (1) has extremal quasi-solutions within the interval $[x, z] = \{y \in \Omega : x(t) \le y(t) \le z(t),\ t \in J\}$.
Proof. 
Consider the iteration schemes
$$\begin{cases} x_n'(t) + M(t)x_n(t) + (Lx_n)(t) = (Rx_{n-1})(t) + M(t)x_{n-1}(t) + (Lx_{n-1})(t), & t \in J_0,\\ \Delta x_n(t_k) + L_k x_n(t_k) = I_k(x_{n-1}(t_k)) + L_k x_{n-1}(t_k), & k = 1,2,\ldots,m,\\ h(x_{n-1}(0), z_{n-1}(T)) + M_1\big(x_n(0) - x_{n-1}(0)\big) - M_2\big(x_n(T) - x_{n-1}(T)\big) = 0, \end{cases}\tag{9}$$
and
$$\begin{cases} z_n'(t) + M(t)z_n(t) + (Lz_n)(t) = (Rz_{n-1})(t) + M(t)z_{n-1}(t) + (Lz_{n-1})(t), & t \in J_0,\\ \Delta z_n(t_k) + L_k z_n(t_k) = I_k(z_{n-1}(t_k)) + L_k z_{n-1}(t_k), & k = 1,2,\ldots,m,\\ h(z_{n-1}(0), x_{n-1}(T)) + M_1\big(z_n(0) - z_{n-1}(0)\big) - M_2\big(z_n(T) - z_{n-1}(T)\big) = 0, \end{cases}\tag{10}$$
for $n = 1, 2, \ldots$, where $x_0 = x$, $z_0 = z$.
It follows from Lemma 3 that (9) and (10) each have a unique solution. We complete the proof in three steps.
Step 1: We prove that $x_{n-1} \le x_n$ and $z_n \le z_{n-1}$, $n = 1, 2, \ldots$.
Set $p(t) = x(t) - x_1(t)$. Employing ($A_1$), we have
$$p'(t) = x'(t) - x_1'(t) \le (Rx)(t) + M(t)x_1(t) + (Lx_1)(t) - (Rx)(t) - M(t)x(t) - (Lx)(t) = -M(t)p(t) - (Lp)(t), \quad t \neq t_k,\ t \in J,$$
$$\Delta p(t_k) = \Delta x(t_k) - \Delta x_1(t_k) \le I_k(x(t_k)) + L_k x_1(t_k) - I_k(x(t_k)) - L_k x(t_k) = -L_k p(t_k), \quad k = 1,2,\ldots,m,$$
and
$$p(0) = x(0) - x_1(0) = \frac{1}{M_1}h(x(0), z(T)) - \frac{M_2}{M_1}\big(x_1(T) - x(T)\big) \le \frac{M_2}{M_1}p(T).$$
From Lemma 1 and $M_1 \ge M_2 > 0$, we get $p \le 0$, so $x \le x_1$.
Using induction, we deduce that the sequence $\{x_n\}$ is monotone nondecreasing. Analogously, $\{z_n\}$ is monotone nonincreasing.
Step 2: We show that $x_1 \le z_1$ if $x \le z$.
Let $p(t) = x_1(t) - z_1(t)$. Using ($H_2$), ($H_3$), and ($A_2$), we get
$$p'(t) = x_1'(t) - z_1'(t) = -M(t)x_1(t) - (Lx_1)(t) + (Rx)(t) + M(t)x(t) + (Lx)(t) + M(t)z_1(t) + (Lz_1)(t) - (Rz)(t) - M(t)z(t) - (Lz)(t) \le -M(t)p(t) - (Lp)(t), \quad t \neq t_k,\ t \in J,$$
$$\Delta p(t_k) = \Delta x_1(t_k) - \Delta z_1(t_k) = -L_k x_1(t_k) + I_k(x(t_k)) + L_k x(t_k) + L_k z_1(t_k) - I_k(z(t_k)) - L_k z(t_k) \le -L_k p(t_k), \quad k = 1,2,\ldots,m,$$
and
$$p(0) = x_1(0) - z_1(0) = \Big[-\frac{1}{M_1}h(x(0), z(T)) + \frac{M_2}{M_1}\big(x_1(T) - x(T)\big) + x(0)\Big] - \Big[-\frac{1}{M_1}h(z(0), x(T)) + \frac{M_2}{M_1}\big(z_1(T) - z(T)\big) + z(0)\Big] \le \frac{M_2}{M_1}p(T).$$
Subsequently, by Lemma 1, one has $p \le 0$, which indicates $x_1 \le z_1$.
Next, we prove that $x_1, z_1$ are coupled lower and upper solutions of (1). Using the assumptions ($H_2$), ($H_3$), ($A_2$), and $x \le x_1$, $z_1 \le z$, we obtain
$$x_1'(t) = (Rx)(t) + M(t)\big(x(t) - x_1(t)\big) + \big(L(x - x_1)\big)(t) \le (Rx_1)(t),$$
$$\Delta x_1(t_k) = L_k\big(x(t_k) - x_1(t_k)\big) + I_k(x(t_k)) \le I_k(x_1(t_k)),$$
$$h(x_1(0), z_1(T)) \le h(x(0), z_1(T)) \le h(x(0), z(T)) + M_2\big(z_1(T) - z(T)\big) \le 0.$$
It is evident that $x_1$ is a coupled lower solution of (1). Analogously, $z_1$ is a coupled upper solution of (1). Using induction, one has $x_n \le z_n$, $n = 1, 2, \ldots$.
Step 3: Based on the above two steps, it can be seen that
$$x_0 \le x_1 \le x_2 \le \cdots \le x_n \le \cdots \le z_n \le \cdots \le z_2 \le z_1 \le z_0,$$
and each $x_n$, $z_n$ satisfies (9) and (10). The sequences $\{x_n\}$, $\{z_n\}$ are uniformly bounded and equicontinuous; using the Arzelà–Ascoli theorem and passing to the limit as $n \to \infty$, we find that the limits $\varrho$, $\varsigma$ satisfy
$$\varrho'(t) = (R\varrho)(t),\ t \neq t_k,\ t \in J, \qquad \Delta\varrho(t_k) = I_k(\varrho(t_k)),\ k = 1,2,\ldots,m, \qquad h(\varrho(0), \varsigma(T)) = 0,$$
and
$$\varsigma'(t) = (R\varsigma)(t),\ t \neq t_k,\ t \in J, \qquad \Delta\varsigma(t_k) = I_k(\varsigma(t_k)),\ k = 1,2,\ldots,m, \qquad h(\varsigma(0), \varrho(T)) = 0.$$
This proves that $\varrho$, $\varsigma$ are the extremal quasi-solutions of problem (1). This ends the proof. □
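Schematically, the coupled iteration differs from the iteration of Theorem 1 only in that the boundary data of each linear problem mixes the two sequences. The short Python template below makes this coupling explicit; the interface solve_linear(eta, zeta), standing for a solver of a problem of type (4) whose boundary term is $h(\eta(0), \zeta(T))$, is an assumption made for illustration, not an interface defined in the paper.

```python
from typing import Callable, List, Tuple

Vec = List[float]

# Schematic version of the coupled iteration (9)-(10): x_n is computed from
# (x_{n-1}, z_{n-1}) and z_n from (z_{n-1}, x_{n-1}); the two sequences are tied
# together only through the nonlinear boundary term h. `solve_linear(eta, zeta)`
# stands for an abstract solver of a problem of type (4) whose boundary condition
# uses h(eta(0), zeta(T)); it is an assumed interface, not one from the paper.

def coupled_iteration(solve_linear: Callable[[Vec, Vec], Vec],
                      x0: Vec, z0: Vec, steps: int = 50) -> Tuple[Vec, Vec]:
    x, z = x0, z0
    for _ in range(steps):
        x_next = solve_linear(x, z)   # scheme (9): boundary term h(x_{n-1}(0), z_{n-1}(T))
        z_next = solve_linear(z, x)   # scheme (10): boundary term h(z_{n-1}(0), x_{n-1}(T))
        x, z = x_next, z_next
    return x, z
```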
Example 2.
Consider the following problem:
$$\begin{cases} x'(t) = -\dfrac{t}{2}x(t) - \dfrac{1}{10t^2}\displaystyle\int_0^t s\,x(s)\,ds, & t \in J = [0,1],\ t \neq \tfrac{1}{3},\\[1mm] \Delta x(t_k) = -\dfrac{1}{15}x^2\big(\tfrac{1}{3}\big), &\\ x(0) + x(1) - 1 = 0. & \end{cases}\tag{11}$$
Set $x = 0$ and
$$z = \begin{cases} \tfrac{1}{2}t + 1, & t \in [0, \tfrac{1}{3}),\\ \tfrac{1}{2}t + \tfrac{1}{4}, & t \in (\tfrac{1}{3}, 1]. \end{cases}$$
Clearly, $x$ is a coupled lower solution, and $z$ is a coupled upper solution with $x \le z$. It is easy to see that (3), (6), ($A_1$), ($H_2$), ($H_3$), and ($A_2$) hold with $M(t) = \frac{t}{2}$, $L_k = \frac{1}{15}$, $(Lx)(t) = \frac{1}{10t^2}\int_0^t s\,x(s)\,ds$, $(L1)(t) = \frac{1}{20}$, $M_1 = 2$, $M_2 = 1$, $\lambda = \frac{1}{2}$, $T = 1$. From Theorem 2, problem (11) has extremal quasi-solutions within the interval $[x, z]$.

5. Weakly Coupled Quasi-Solutions of Problem (1)

This section establishes the existence of the weakly coupled extremal quasi-solutions of (1).
Definition 5.
$u, w \in \Omega$ are said to be weakly coupled lower and upper solutions of (1) if
$$u'(t) \le (Ru)(t),\ t \neq t_k,\ t \in J, \qquad \Delta u(t_k) \le I_k(u(t_k)),\ k = 1,2,\ldots,m, \qquad h(w(0), u(T)) \le 0,$$
and
$$w'(t) \ge (Rw)(t),\ t \neq t_k,\ t \in J, \qquad \Delta w(t_k) \ge I_k(w(t_k)),\ k = 1,2,\ldots,m, \qquad h(u(0), w(T)) \ge 0.$$
Definition 6.
A pair $(\Phi, \Psi)$, $\Phi, \Psi \in \Omega$, is called a weakly coupled quasi-solution of (1) if
$$\Phi'(t) = (R\Phi)(t),\ t \neq t_k,\ t \in J, \qquad \Delta\Phi(t_k) = I_k(\Phi(t_k)),\ k = 1,2,\ldots,m, \qquad h(\Psi(0), \Phi(T)) = 0,$$
and
$$\Psi'(t) = (R\Psi)(t),\ t \neq t_k,\ t \in J, \qquad \Delta\Psi(t_k) = I_k(\Psi(t_k)),\ k = 1,2,\ldots,m, \qquad h(\Phi(0), \Psi(T)) = 0.$$
Definition 7.
A weakly coupled quasi-solution $(\varrho, \varsigma)$ of (1) is said to be the weakly coupled minimal and maximal quasi-solution of (1) if, for every weakly coupled quasi-solution $(\Phi, \Psi)$ of (1), $\varrho(t) \le \Phi(t)$ and $\Psi(t) \le \varsigma(t)$ for all $t \in J$. If both the minimal and maximal quasi-solutions exist, they are called the weakly coupled extremal quasi-solutions of problem (1).
Theorem 3.
Suppose that ($H_2$), ($H_3$), (3), and (6) hold. Furthermore, it is assumed that
  • ($B_1$) $u, w \in \Omega$ are weakly coupled lower and upper solutions of (1) with $u(t) \le w(t)$ on $J$;
  • ($B_2$) there exist $M_1, M_2$ with $M_1 \ge M_2 > 0$; moreover, $h \in C(\mathbb{R}\times\mathbb{R},\mathbb{R})$ is nondecreasing with respect to the first variable and satisfies
$$h(x, \bar{z}) - h(x, z) \le -M_2(\bar{z} - z), \quad \text{if } u(T) \le z \le \bar{z} \le w(T).$$
Then, (1) has weakly coupled extremal quasi-solutions within the interval $[u, w] = \{x \in \Omega : u(t) \le x(t) \le w(t),\ t \in J\}$.
Proof. 
Consider the iteration schemes
$$\begin{cases} u_n'(t) + M(t)u_n(t) + (Lu_n)(t) = (Ru_{n-1})(t) + M(t)u_{n-1}(t) + (Lu_{n-1})(t), & t \in J_0,\\ \Delta u_n(t_k) + L_k u_n(t_k) = I_k(u_{n-1}(t_k)) + L_k u_{n-1}(t_k), & k = 1,2,\ldots,m,\\ h(w_{n-1}(0), u_{n-1}(T)) + M_1\big(u_n(0) - u_{n-1}(0)\big) - M_2\big(u_n(T) - u_{n-1}(T)\big) = 0, \end{cases}\tag{12}$$
and
$$\begin{cases} w_n'(t) + M(t)w_n(t) + (Lw_n)(t) = (Rw_{n-1})(t) + M(t)w_{n-1}(t) + (Lw_{n-1})(t), & t \in J_0,\\ \Delta w_n(t_k) + L_k w_n(t_k) = I_k(w_{n-1}(t_k)) + L_k w_{n-1}(t_k), & k = 1,2,\ldots,m,\\ h(u_{n-1}(0), w_{n-1}(T)) + M_1\big(w_n(0) - w_{n-1}(0)\big) - M_2\big(w_n(T) - w_{n-1}(T)\big) = 0, \end{cases}\tag{13}$$
for $n = 1, 2, \ldots$, where $u_0 = u$, $w_0 = w$.
By Lemma 3, $u_n$ and $w_n$ are well defined. First, we prove that $u_0 \le u_1 \le w_1 \le w_0$.
Let $p(t) = u_0(t) - u_1(t)$. Applying ($B_1$), we have
$$p'(t) = u_0'(t) - u_1'(t) \le (Ru_0)(t) - \big[(Ru_0)(t) + M(t)u_0(t) + (Lu_0)(t)\big] + M(t)u_1(t) + (Lu_1)(t) = -M(t)p(t) - (Lp)(t), \quad t \neq t_k,\ t \in J,$$
$$\Delta p(t_k) = \Delta u_0(t_k) - \Delta u_1(t_k) \le I_k(u_0(t_k)) + L_k u_1(t_k) - I_k(u_0(t_k)) - L_k u_0(t_k) = -L_k p(t_k), \quad k = 1,2,\ldots,m,$$
and
$$p(0) = u_0(0) - \Big[-\frac{1}{M_1}h(w_0(0), u_0(T)) + \frac{M_2}{M_1}\big(u_1(T) - u_0(T)\big) + u_0(0)\Big] = \frac{1}{M_1}h(w_0(0), u_0(T)) + \frac{M_2}{M_1}\big(u_0(T) - u_1(T)\big) \le \frac{M_2}{M_1}p(T).$$
By Lemma 1, we obtain $p(t) \le 0$ for $t \in J$, that is, $u_0 \le u_1$. Similar arguments show that $w_1 \le w_0$.
Now, set $p(t) = u_1(t) - w_1(t)$. Using ($H_2$) and ($H_3$), we get
$$p'(t) = u_1'(t) - w_1'(t) = -M(t)u_1(t) - (Lu_1)(t) + (Ru_0)(t) + M(t)u_0(t) + (Lu_0)(t) + M(t)w_1(t) + (Lw_1)(t) - (Rw_0)(t) - M(t)w_0(t) - (Lw_0)(t) \le -M(t)p(t) - (Lp)(t), \quad t \neq t_k,\ t \in J,$$
and
$$\Delta p(t_k) = \Delta u_1(t_k) - \Delta w_1(t_k) = -L_k u_1(t_k) + I_k(u_0(t_k)) + L_k u_0(t_k) + L_k w_1(t_k) - I_k(w_0(t_k)) - L_k w_0(t_k) \le -L_k p(t_k), \quad k = 1,2,\ldots,m.$$
Noticing $u_0 \le w_0$ and ($B_2$), we obtain
$$p(0) = u_1(0) - w_1(0) = \Big[-\frac{1}{M_1}h(w_0(0), u_0(T)) + \frac{M_2}{M_1}\big(u_1(T) - u_0(T)\big) + u_0(0)\Big] - \Big[-\frac{1}{M_1}h(u_0(0), w_0(T)) + \frac{M_2}{M_1}\big(w_1(T) - w_0(T)\big) + w_0(0)\Big] \le \frac{M_2}{M_1}p(T).$$
Lemma 1 implies that $p(t) \le 0$, $t \in J$, i.e., $u_1 \le w_1$.
Using ($H_2$), ($H_3$), ($B_2$), and the fact that $u_0 \le u_1$ and $w_1 \le w_0$, we obtain
$$u_1'(t) = (Ru_0)(t) + M(t)\big(u_0(t) - u_1(t)\big) + \big(L(u_0 - u_1)\big)(t) \le (Ru_1)(t), \quad t \neq t_k,\ t \in J,$$
$$\Delta u_1(t_k) = L_k\big(u_0(t_k) - u_1(t_k)\big) + I_k(u_0(t_k)) \le I_k(u_1(t_k)), \quad k = 1,2,\ldots,m,$$
$$h(w_1(0), u_1(T)) \le h(w_0(0), u_1(T)) \le h(w_0(0), u_0(T)) - M_2\big(u_1(T) - u_0(T)\big) \le 0,$$
so $u_1$ is a weakly coupled lower solution of (1). Similarly,
$$w_1'(t) = (Rw_0)(t) + M(t)\big(w_0(t) - w_1(t)\big) + \big(L(w_0 - w_1)\big)(t) \ge (Rw_1)(t), \quad t \neq t_k,\ t \in J,$$
$$\Delta w_1(t_k) = L_k\big(w_0(t_k) - w_1(t_k)\big) + I_k(w_0(t_k)) \ge I_k(w_1(t_k)), \quad k = 1,2,\ldots,m,$$
$$h(u_1(0), w_1(T)) \ge h(u_0(0), w_1(T)) \ge h(u_0(0), w_0(T)) - M_2\big(w_1(T) - w_0(T)\big) \ge 0,$$
so $w_1$ is a weakly coupled upper solution of (1).
Now, defining the sequences $\{u_n\}$, $\{w_n\}$ by (12) and (13), we have
$$u_0 \le u_1 \le u_2 \le \cdots \le u_n \le \cdots \le w_n \le \cdots \le w_2 \le w_1 \le w_0.$$
By induction, it is easy to see that there exist $\varrho$, $\varsigma$ such that $\lim_{n\to\infty} u_n(t) = \varrho(t)$ and $\lim_{n\to\infty} w_n(t) = \varsigma(t)$ uniformly on $J$, and $\varrho$, $\varsigma$ are weakly coupled quasi-solutions of problem (1).
Next, we prove that $\varrho$, $\varsigma$ are the weakly coupled extremal quasi-solutions of (1). Let $(x_1, x_2)$ be any weakly coupled quasi-solution of (1) in $[u_0, w_0]$. If $u_n \le x_1$ and $x_2 \le w_n$, then, considering $p = u_{n+1} - x_1$ and employing Lemma 1, one obtains $u_{n+1} \le x_1$; by induction, $u_n \le x_1$ for all $n$. Taking the limit as $n \to \infty$, we conclude that $\varrho \le x_1$. A similar argument yields $x_1 \le \varsigma$ and $\varrho \le x_2 \le \varsigma$. This ends the proof. □
Example 3.
Consider the following problem:
$$\begin{cases} x'(t) = -\dfrac{t}{6}x(t) - \dfrac{1}{3t}\displaystyle\int_0^{t^2} x(s)\,ds, & t \in J = [0,1],\ t \neq \tfrac{1}{2},\\[1mm] \Delta x(t_k) = -\dfrac{1}{4}x\big(\tfrac{1}{2}\big), &\\ x^2(0) + x(0) - x(1) = 0. & \end{cases}$$
Let $u = 0$ and
$$w = \begin{cases} t, & t \in [0, \tfrac{1}{2}),\\ 0, & t \in (\tfrac{1}{2}, 1]. \end{cases}$$
Clearly, $u$ is a weakly coupled lower solution and $w$ is a weakly coupled upper solution with $u \le w$. It is easy to see that (3), (6), ($B_1$), ($H_2$), ($H_3$), and ($B_2$) hold with $M(t) = \frac{t}{6}$, $L_k = \frac{1}{4}$, $(Lx)(t) = \frac{1}{3t}\int_0^{t^2} x(s)\,ds$, $(L1)(t) = \frac{t}{3}$, $M_1 = 2$, $M_2 = 1$, $\lambda = \frac{1}{2}$, $T = 1$. From Theorem 3, the above problem has weakly coupled extremal quasi-solutions in $[u, w]$.

6. Conclusions

This study extends the notion of causal operators to impulsive differential equations with nonlinear periodic boundary conditions. Under the assumption of the existence of (coupled or weakly coupled) upper and lower solutions, we applied the monotone iterative technique to prove the existence of extremal solutions, extremal quasi-solutions, and weakly coupled extremal quasi-solutions for such impulsive differential equations.
To validate and illustrate the theoretical results obtained, we present three different examples that summarise the essence of our findings. These examples are carefully designed to demonstrate the applicability of our theoretical framework and the effectiveness of the monotone iterative technique in the context of nonlinear impulsive differential equations with causal operators. Each example is analyzed in detail, and the results are shown to be consistent with our theoretical investigation. Through these examples, we highlight the significance of our work in advancing the study of impulsive differential equations and providing new insights into their solution behaviour.

Author Contributions

Statement of the problem and methodology, W.W.; writing—original draft preparation, J.B.; writing—review and editing. All authors have read and agreed to the published version of the manuscript.

Funding

This research received no external funding.

Data Availability Statement

No new data were created or analyzed in this study.

Acknowledgments

The authors are very grateful to each reviewer for their careful reading and valuable comments.

Conflicts of Interest

The authors declare no conflicts of interest.

References

  1. Bainov, D.D.; Lakshmikantham, V.; Simeonov, P.S. Theory of Impulsive Differential Equations; World Scientific: Singapore, 1989.
  2. Haddad, W.M.; Chellaboina, V.S.; Nersesov, S.G. Impulsive and Hybrid Dynamical Systems: Stability, Dissipativity, and Control; Princeton University Press: Princeton, NJ, USA, 2006.
  3. Chen, L.; Sun, J. Nonlinear boundary value problem of first order impulsive functional differential equations. J. Math. Anal. Appl. 2006, 318, 726–741.
  4. Suresh, S.; Thamizhendhi, G. Some results on fractional semilinear impulsive integro-differential equations. Malaya J. Mat. 2019, 7, 259–263.
  5. Thiam, P.A.; Dione, D.; Bodjrenou, F.; Diop, M.A. A note on existence results for noninstantaneous impulsive integrodifferential systems. Res. Math. 2024, 11, 2335700.
  6. Shah, K.; Abdalla, B.; Abdeljawad, T.; Gul, R. Analysis of multipoint impulsive problem of fractional-order differential equations. Bound. Value Probl. 2023, 2023, 1.
  7. Suo, L.; Fečkan, M.; Wang, J.R. Existence of periodic solutions to quaternion-valued impulsive differential equations. Qual. Theor. Dyn. Syst. 2023, 22, 1.
  8. Al Nuwairan, M.; Ibrahim, A.G. Nonlocal impulsive differential equations and inclusions involving Atangana-Baleanu fractional derivative in infinite dimensional spaces. Aims Math. 2023, 8, 11752–11780.
  9. Xia, M.; Liu, L.; Fang, J.; Zhang, Y. Stability analysis for a class of stochastic differential equations with impulses. Mathematics 2023, 11, 1541.
  10. Mailleret, L.; Grognard, F. Global stability and optimisation of a general impulsive biological control model. Math. Biosci. 2009, 221, 91–100.
  11. Liu, S.; Wang, J.R.; Zhou, Y. Optimal control of noninstantaneous impulsive differential equations. J. Frankl. Inst. 2017, 354, 7668–7698.
  12. Xing, B.; Liu, H.; Tang, X.; Shi, L. Neural network methods based on efficient optimization algorithms for solving impulsive differential equations. IEEE Trans. Artif. Intell. 2022, 5, 1067–1076.
  13. Lakshmikantham, V.; Leela, S.; Drici, Z.; McRae, F.A. Theory of Causal Differential Equations; World Scientific Press: Paris, France, 2009.
  14. Geng, F. Differential equations involving causal operators with nonlinear periodic boundary conditions. Math. Comput. Model. 2008, 48, 859–866.
  15. Jankowski, T. Boundary value problems with causal operators. Nonlinear Anal. Theory Methods Appl. 2008, 68, 3625–3632.
  16. Corduneanu, C. Functional Equations with Causal Operators; Taylor and Francis: New York, NY, USA, 2003.
  17. Drici, Z.; McRae, F.A.; Devi, J.V. Differential equations with causal operators in a Banach space. Nonlinear Anal. Theory Methods Appl. 2005, 62, 301–313.
  18. Jankowski, T. Boundary value problems for difference equations with causal operators. Appl. Math. Comput. 2011, 218, 2549–2557.
  19. Zhao, Y.; Song, G.; Sun, X. Integral boundary value problems with causal operators. Comput. Math. Appl. 2010, 59, 2768–2775.
  20. Jabeen, T.; Agarwal, R.P.; O'Regan, D.; Vasile, L. Impulsive functional differential equations with causal operators. Dyn. Syst. Appl. 2017, 26, 411–424.
  21. Chen, L.; Sun, J. Nonlinear boundary value problem of first order impulsive integro-differential equations. J. Comput. Appl. Math. 2007, 202, 392–401.