Article

The Sign of the Green Function of an n-th Order Linear Boundary Value Problem

by Pedro Almenar Belenguer 1,*,† and Lucas Jódar 2,†
1 Vodafone Spain, Avda. América 115, 28042 Madrid, Spain
2 Instituto Universitario de Matemática Multidisciplinar, Universitat Politècnica de València, Camino de Vera s/n, 46022 Valencia, Spain
* Author to whom correspondence should be addressed.
† These authors contributed equally to this work.
Mathematics 2020, 8(5), 673; https://doi.org/10.3390/math8050673
Submission received: 28 March 2020 / Revised: 20 April 2020 / Accepted: 23 April 2020 / Published: 29 April 2020

Abstract:
This paper provides results on the sign of the Green function (and its partial derivatives) of an n-th order boundary value problem subject to a wide set of homogeneous two-point boundary conditions. The dependence of the absolute value of the Green function and some of its partial derivatives with respect to the extremes where the boundary conditions are set is also assessed.

1. Introduction

Let $J$ be a compact interval in $\mathbb{R}$ and let us consider the real disfocal differential operator $L\colon C^{n}(J) \to C(J)$ defined by

$$Ly = y^{(n)}(x) + a_{n-1}(x)\, y^{(n-1)}(x) + \cdots + a_{0}(x)\, y(x), \quad x \in J, \qquad (1)$$

where $a_{j}(x) \in C(J)$, $0 \le j \le n-1$. Following Eloe and Ridenhour [1], let $\Omega_{l}$ be the set whose members are collections of $l$ different ordered integer indices $i$ such that $0 \le i \le n-1$, let $k \in \mathbb{N}$ be such that $1 \le k \le n-1$, and let $\alpha \in \Omega_{k}$ be the set $\{\alpha_{1}, \ldots, \alpha_{k}\}$ and $\beta \in \Omega_{n-k}$ the set $\{\beta_{1}, \ldots, \beta_{n-k}\}$, both associated to the homogeneous boundary conditions

$$y^{(\alpha_{i})}(a) = 0, \quad i = 1, 2, \ldots, k, \ \alpha_{i} \in \alpha, \qquad (2)$$

$$y^{(\beta_{i})}(b) = 0, \quad i = 1, 2, \ldots, n-k, \ \beta_{i} \in \beta, \qquad (3)$$
where $[a,b] \subseteq J$. Throughout this paper we will impose the condition that, for any integer $m$ such that $1 \le m \le n$, at least $m$ terms of the sequence $\alpha_{1}, \ldots, \alpha_{k}, \beta_{1}, \ldots, \beta_{n-k}$ are smaller than $m$. Due to their resemblance to the conditions defined by Butler and Erbe in [2], we will call them admissible boundary conditions (note that (2) and (3) are not exactly the boundary conditions defined by Butler and Erbe, since the latter applied to the so-called quasi-derivatives of $y(x)$ and not to derivatives). In particular, if for every integer $m$ such that $1 \le m \le p+1$ exactly $m$ terms of the sequence $\alpha_{1}, \ldots, \alpha_{k}, \beta_{1}, \ldots, \beta_{n-k}$ are smaller than $m$, we will say that the boundary conditions are $p$-alternate. In the case $p = n-1$ we will call the boundary conditions strongly admissible. The admissible conditions cover well-known cases such as conjugate boundary conditions ($\alpha_{1} = 0, \alpha_{2} = 1, \ldots, \alpha_{k} = k-1$ and $\beta_{1} = 0, \beta_{2} = 1, \ldots, \beta_{n-k} = n-k-1$), focal boundary conditions (right focal with $\alpha_{1} = 0, \alpha_{2} = 1, \ldots, \alpha_{k} = k-1$ and $\beta_{1} = k, \beta_{2} = k+1, \ldots, \beta_{n-k} = n-1$, or left focal with $\alpha_{1} = n-k, \alpha_{2} = n-k+1, \ldots, \alpha_{k} = n-1$ and $\beta_{1} = 0, \beta_{2} = 1, \ldots, \beta_{n-k} = n-k-1$) and many others. The focal boundary conditions are also strongly admissible (that is, $(n-1)$-alternate).
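Since admissibility and $p$-alternation are purely combinatorial conditions on the index sets, they can be checked mechanically. The sketch below is our own illustration (the helper names are not from the paper):

```python
def is_admissible(alpha, beta, n):
    """(alpha, beta) are admissible if, for every m with 1 <= m <= n,
    at least m terms of the combined index sequence are smaller than m."""
    seq = sorted(alpha) + sorted(beta)
    if len(seq) != n:
        raise ValueError("alpha and beta must contain n indices in total")
    return all(sum(1 for idx in seq if idx < m) >= m for m in range(1, n + 1))

def alternation_order(alpha, beta, n):
    """Largest p such that (alpha, beta) are p-alternate: for every m
    with 1 <= m <= p + 1, exactly m terms are smaller than m."""
    seq = sorted(alpha) + sorted(beta)
    p = -1
    for m in range(1, n + 1):
        if sum(1 for idx in seq if idx < m) != m:
            break
        p = m - 1
    return p

# Conjugate conditions for n = 4, k = 2: alpha = {0, 1}, beta = {0, 1}
print(is_admissible({0, 1}, {0, 1}, 4))       # True
# Right focal conditions alpha = {0, 1}, beta = {2, 3} are (n-1)-alternate,
# i.e. strongly admissible
print(alternation_order({0, 1}, {2, 3}, 4))   # 3
```

Both outputs agree with the classification above: conjugate conditions are admissible, while focal conditions are strongly admissible.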
The purpose of this paper is to provide results on the sign of $G(x,t)$, the Green function associated to the problem

$$Ly = 0, \ x \in (a,b); \quad y^{(\alpha_{i})}(a) = 0, \ \alpha_{i} \in \alpha; \quad y^{(\beta_{i})}(b) = 0, \ \beta_{i} \in \beta, \qquad (4)$$
as well as on the sign of some of its partial derivatives with respect to $x$, both in the interval $(a,b)$ and at the extremes $a$ and $b$. We will also analyze the dependence of the absolute value of $G(x,t)$ and its derivatives on the extremes $a$ and $b$. In this sense, this paper represents an extension of the work by Eloe and Ridenhour [1], which in turn extended previous results from Peterson [3,4], Elias [5] and Peterson and Ridenhour [6]. Note that the disfocality of $L$ on $[a,b]$, according to Nehari [7], implies that $y(x) \equiv 0$ is the only solution of $Ly = 0$ satisfying $y^{(i)}(x_{i}) = 0$, $i = 0, 1, 2, \ldots, n-1$, with $x_{i} \in [a,b]$, and it also guarantees the existence of the Green function of (4).
It is well known (see for instance [8], Chapter 3) that problems of the type

$$Ly = f, \ x \in (a,b); \quad y^{(\alpha_{i})}(a) = 0, \ \alpha_{i} \in \alpha; \quad y^{(\beta_{i})}(b) = 0, \ \beta_{i} \in \beta, \qquad (5)$$

with $f \in C[a,b]$ an input function, have a solution given by $y(x) = \int_{a}^{b} G(x,t) f(t)\,dt$. Therefore, the knowledge of the sign of $G(x,t)$ and its derivatives can provide information on the sign of the solution $y(x)$ and of these same derivatives, at least when $f$ does not change sign on $(a,b)$. This was already used by Eloe and Ridenhour in [1] to show that a clamped beam is stiffer than a simply supported beam. Likewise, the evolution of $G(x,t)$ as $a$ or $b$ vary can also provide insights on the dependence of the value of $y(x)$ on these extremes, and can allow comparing the effect of a longer separation of the extremes when the same input function $f$ is applied to a system modeled by (5).
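For the lowest-order conjugate case ($n = 2$, $k = 1$, $Ly = y''$, $y(a) = y(b) = 0$) the Green function is available in closed form, so both the representation $y(x) = \int_{a}^{b} G(x,t) f(t)\,dt$ and the constant (negative) sign of $G$ can be verified directly. The following sketch is our own illustration, not an example from the paper:

```python
import numpy as np

# Green function of y'' = f, y(0) = y(1) = 0 (conjugate conditions, n = 2, k = 1):
# G(x, t) = t(x - 1) for t <= x and x(t - 1) for t >= x; it is negative on (0,1)x(0,1).
def G(x, t):
    return np.where(t <= x, t * (x - 1.0), x * (t - 1.0))

t = np.linspace(0.0, 1.0, 2001)
w = np.full(t.size, t[1] - t[0])        # trapezoidal quadrature weights
w[0] = w[-1] = (t[1] - t[0]) / 2.0

x = 0.3
# For the input f = 1, y(x) = int_0^1 G(x,t) dt must equal x(x - 1)/2,
# the exact solution of y'' = 1, y(0) = y(1) = 0.
y = (G(x, t) * w).sum()
print(abs(y - x * (x - 1.0) / 2.0) < 1e-9)   # True
print(bool(np.all(G(0.5, t[1:-1]) < 0.0)))   # True: G < 0 in the interior
```

The quadrature is exact here because the integrand is piecewise linear with its kink at a grid node.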
The knowledge of the sign of $G(x,t)$ is also useful to find information on the eigenvalues and eigenfunctions of the general problem

$$Ly = \lambda \sum_{l=0}^{\mu} c_{l}(x)\, y^{(l)}(x), \ x \in (a,b); \quad y^{(\alpha_{i})}(a) = 0, \ \alpha_{i} \in \alpha; \quad y^{(\beta_{i})}(b) = 0, \ \beta_{i} \in \beta, \qquad (6)$$

with $\mu \le n-1$ and $c_{l}(x) \in C(J)$ for $0 \le l \le \mu$. These problems are tackled by converting them into the equivalent integral problem

$$My(x) = \frac{1}{\lambda}\, y(x), \quad x \in [a,b], \qquad (7)$$

where $M$ is the operator $M\colon C^{\mu}[a,b] \to C^{n}[a,b]$ defined by

$$My(x) = \int_{a}^{b} G(x,t) \sum_{l=0}^{\mu} c_{l}(t)\, y^{(l)}(t)\,dt, \quad x \in [a,b]. \qquad (8)$$
If the partial derivative of $G(x,t)$ of the highest order whose sign is constant on $(a,b)$ is of order not lower than $\mu$, it is possible to define a cone $P$ associated to that partial derivative such that $MP \subseteq P$ and, with the help of the cone theory elaborated by Krein and Rutman [9] and Krasnosel'skii [10], to prove that there exists a solution of (7) associated to the smallest eigenvalue $\lambda$. Moreover, it is possible to determine some properties of $\lambda$ and even to compare the values of $\lambda$ for different boundary conditions. Refs. [11,12,13,14,15,16,17] are examples that follow this approach. In all of them, therefore, the knowledge of the sign of the derivatives of $G(x,t)$ is critical.
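As a toy illustration of this integral-operator approach (our own sketch, not taken from the paper): for $n = 2$, $Ly = y''$ with conjugate conditions on $(0,\pi)$, $\mu = 0$ and $c_{0}(x) = -1$, problem (6) reads $y'' = -\lambda y$, whose smallest eigenvalue is $\lambda = 1$ with eigenfunction $\sin x$. Discretizing the operator $M$ and applying power iteration recovers $1/\lambda$ as its dominant eigenvalue:

```python
import numpy as np

a, b, N = 0.0, np.pi, 400
x = np.linspace(a, b, N)
h = x[1] - x[0]

# Conjugate Green function of y'' = f, y(a) = y(b) = 0 on (0, pi)
X, T = np.meshgrid(x, x, indexing="ij")
G = np.where(T <= X, (X - b) * (T - a), (X - a) * (T - b)) / (b - a)

# Discretization of M y(x) = int_a^b G(x,t) c_0(t) y(t) dt with c_0 = -1
w = np.full(N, h)
w[0] = w[-1] = h / 2.0                # trapezoidal weights
M = -G * w                            # kernel -G(x,t) > 0, row-wise quadrature

y = np.ones(N)                        # power iteration on the discretized M
for _ in range(200):
    y = M @ y
    y /= np.linalg.norm(y)
mu = y @ (M @ y)                      # dominant eigenvalue of M, i.e. 1/lambda_min

print(abs(1.0 / mu - 1.0) < 1e-2)     # True: smallest eigenvalue lambda ~ 1
```

The positivity of the discretized kernel $-G$ is what makes the dominant eigenvalue simple and positive; this is the discrete counterpart of the cone argument sketched above.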
The non-linear version of (6), namely

$$Ly = \lambda f(y, x), \quad x \in (a,b), \qquad (9)$$

subject to different homogeneous, mixed or integral boundary conditions (see for instance [18,19]), is also usually addressed by converting it into the integral problem

$$\frac{1}{\lambda}\, y(x) = \int_{a}^{b} G(x,t) f(y(t), t)\,dt, \quad x \in (a,b). \qquad (10)$$
In most of these problems, the information about the sign of the Green function is relevant to apply other tools (fixed-point theorems, upper and lower solutions method, fixed-point index theory, etc.) to determine the existence of a solution. In some of them, the knowledge of the sign of the partial derivatives can help to achieve the same goal ([18,20,21]).
As for physical applicability, problems of the types (5), (6) and (9) appear in many situations, like the study of the deflections of beams, both straight ones with non-homogeneous cross-sections in free vibration (which are subject to the fourth-order linear Euler–Bernoulli equation) and curved ones with different shapes. An account of these and other applications can be found in [22], Chapter IV.
Throughout the paper we will use the notations $G(\alpha, \beta, x, t)$ and $G_{a,b}(x,t)$ (and further $G_{a,b}(\alpha, \beta, x, t)$) when we want to highlight the dependence of the Green function of (4) on the boundary conditions $(\alpha, \beta)$ and on the extremes $a$, $b$, respectively. That will be particularly useful when we manipulate Green functions subject to different boundary conditions or different extremes. We will denote by $H(x,t)$ and $I(x,t)$ the partial derivatives of $G(\alpha, \beta, x, t)$ with respect to the extremes $b$ and $a$, respectively, that is,

$$H(x,t) = \frac{\partial G(x,t)}{\partial b}, \quad I(x,t) = \frac{\partial G(x,t)}{\partial a}, \quad (x,t) \in [a,b] \times [a,b]. \qquad (11)$$

We will say that $a$, $b$ are interior to $A$, $B$ if $A \le a < b \le B$ and either $A < a$ or $b < B$. We will use the expression $\operatorname{card}\{D\}$ to denote the number of elements (or cardinality) of the set $D$.
Likewise, if we assume that $y$ is a function with an $(n-1)$-th derivative on $[a,b]$, we will make use of the following nomenclature associated to $(\alpha, \beta)$:
  • $K(\alpha,\beta)$ is the minimum derivative order of $y(x)$ for which the boundary conditions $(\alpha,\beta)$ specify that $y^{(i)}(a) = 0$ or $y^{(i)}(b) = 0$ for $i = K(\alpha,\beta)+1, \ldots, n-1$, with $K(\alpha,\beta) = n-1$ if both $y^{(n-1)}(a) \ne 0$ and $y^{(n-1)}(b) \ne 0$.
  • $m(\alpha, i)$ is the number of derivatives of $y$ of order equal to or higher than $i$ which the boundary conditions $\alpha$ do not specify to be zero at $a$.
  • $n(\beta, i)$ is the number of derivatives of $y$ of order higher than $i$ which the boundary conditions $\beta$ do specify to be zero at $b$.
  • $\alpha_{A} \in \alpha$ is the greatest index such that $y^{(j)}(a) = 0$ for $\alpha_{1} \le j \le \alpha_{A}$ and $y^{(j)}(b) \ne 0$ for $\alpha_{1} \le j \le \alpha_{A}-1$, and $\beta_{B} \in \beta$ is the greatest index such that $y^{(j)}(b) = 0$ for $\beta_{1} \le j \le \beta_{B}$ and $y^{(j)}(a) \ne 0$ for $\beta_{1} \le j \le \beta_{B}-1$. Note that if $\beta_{B} \notin \alpha$ then the boundary conditions are $p$-alternate with $p > \beta_{B}$, whereas if $\beta_{B} \in \alpha$ and $\beta_{B} > 0$ then the boundary conditions are $(\beta_{B}-1)$-alternate.
  • $S(\alpha)$ is the sum of all indices of $\alpha$. Likewise, $S(\beta)$ is the sum of all indices of $\beta$.
To make these definitions clear, let us use some examples. Let us assume that $n = 8$, $k = 4$, $\alpha = \{0, 1, 2, 5\}$ and $\beta = \{3, 4, 5, 7\}$. Then $\alpha_{A} = 2$ (since $3 \notin \alpha$), $\beta_{B} = 5$ (since $6 \notin \beta$, but also $5 \in \alpha$), $K(\alpha,\beta) = 6$ (since $6 \notin \alpha \cup \beta$ and $7 \in \beta$), $S(\alpha) = 0+1+2+5 = 8$ and $S(\beta) = 3+4+5+7 = 19$. Likewise, let us assume that $n = 7$, $k = 2$, $\alpha = \{3, 5\}$ and $\beta = \{0, 1, 2, 4, 5\}$. Then $\alpha_{A} = 3$, $\beta_{B} = 2$, $K(\alpha,\beta) = 6$, $S(\alpha) = 8$ and $S(\beta) = 12$.
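These quantities are purely combinatorial, so the two worked examples can be verified mechanically. The helpers below are our own illustrative implementations of the definitions above:

```python
def K(alpha, beta, n):
    # Largest order whose vanishing is not forced at either extreme:
    # for i = K + 1, ..., n - 1 every order belongs to alpha or beta.
    free = [i for i in range(n) if i not in alpha and i not in beta]
    return free[-1] if free else n - 1

def m(alpha, i, n):
    # Derivatives of order >= i not specified to vanish at a
    return sum(1 for j in range(i, n) if j not in alpha)

def n_b(beta, i, n):
    # Derivatives of order > i specified to vanish at b
    return sum(1 for j in range(i + 1, n) if j in beta)

def alpha_A(alpha, beta):
    j = min(alpha)
    while j + 1 in alpha and j not in beta:
        j += 1
    return j

def beta_B(alpha, beta):
    j = min(beta)
    while j + 1 in beta and j not in alpha:
        j += 1
    return j

alpha, beta, n = {0, 1, 2, 5}, {3, 4, 5, 7}, 8
print(alpha_A(alpha, beta), beta_B(alpha, beta), K(alpha, beta, n),
      sum(alpha), sum(beta))   # 2 5 6 8 19

alpha, beta, n = {3, 5}, {0, 1, 2, 4, 5}, 7
print(alpha_A(alpha, beta), beta_B(alpha, beta), K(alpha, beta, n),
      sum(alpha), sum(beta))   # 3 2 6 8 12
```

The two printed lines reproduce exactly the values worked out by hand in the examples above.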
As for the organization of the paper, Section 2 provides its main results. Concretely, in Section 2.1 we will tackle the general case of admissible boundary conditions, in Section 2.2 we will prove some additional results associated to $p$-alternate boundary conditions, and in Section 2.3 we will cover the strongly admissible boundary conditions. Finally, in Section 3 we will elaborate some conclusions.

2. Results

2.1. The Sign of the Green Function and Its Derivatives on the Admissible Case

In this subsection, we will prove some basic results concerning the sign of the Green function of the problem (4) and its derivatives, as well as comparisons of their absolute values when the extremes a and b vary. To this end, it is interesting to recall a couple of results from Eloe and Ridenhour, which we will state (modified slightly using our notations) for completeness.
Theorem 1 
(Theorem 3.3 of [1]).
1. If $\alpha_{1} = 0$, then for $i = 0, \ldots, \beta_{1}$,
$$(-1)^{n-k}\, \frac{\partial^{i} G(x,t)}{\partial x^{i}} > 0, \quad (x,t) \in (a,b) \times (a,b). \qquad (12)$$
2. If $\beta_{1} = 0$, then for $i = 0, \ldots, \alpha_{1}$,
$$(-1)^{n-k+i}\, \frac{\partial^{i} G(x,t)}{\partial x^{i}} > 0, \quad (x,t) \in (a,b) \times (a,b). \qquad (13)$$
Theorem 2 
(Theorem 3.4 of [1]). Let us suppose that $\max(\alpha_{k}, \beta_{n-k}) < n-1$, and that $a_{1}$, $b_{1}$ are extremes interior to $a_{2}$, $b_{2}$, with $[a_{2}, b_{2}] \subseteq J$.
1. If $\alpha_{1} = 0$, then for $i = 0, \ldots, \beta_{1}$,
$$(-1)^{n-k}\, \frac{\partial^{i} G_{a_{2},b_{2}}(x,t)}{\partial x^{i}} > (-1)^{n-k}\, \frac{\partial^{i} G_{a_{1},b_{1}}(x,t)}{\partial x^{i}} > 0, \quad (x,t) \in (a_{1},b_{1}) \times (a_{1},b_{1}). \qquad (14)$$
2. If $\beta_{1} = 0$, then for $i = 0, \ldots, \alpha_{1}$,
$$(-1)^{n-k+i}\, \frac{\partial^{i} G_{a_{2},b_{2}}(x,t)}{\partial x^{i}} > (-1)^{n-k+i}\, \frac{\partial^{i} G_{a_{1},b_{1}}(x,t)}{\partial x^{i}} > 0, \quad (x,t) \in (a_{1},b_{1}) \times (a_{1},b_{1}). \qquad (15)$$
These theorems, although of considerable scope, unfortunately do not yield information on the sign of all the partial derivatives of $G(x,t)$ at the extremes $a$ and $b$, whose knowledge is necessary for the application of cone theory to the eigenvalue problem (6) mentioned in the Introduction, as well as for the analysis of the strongly admissible case (see Section 2.3). Likewise, they do not cover the dependence of $G(x,t)$ on the extremes $a$ and $b$ when either $\alpha_{k}$ or $\beta_{n-k}$ is equal to $n-1$. These shortcomings, and the lack of explicit proofs of these theorems in [1] (the reader is left to obtain them following the techniques devised by the authors in previous sections of that paper), lead us to dedicate this subsection to reproducing what we suppose were the steps used by Eloe and Ridenhour to obtain Theorems 1 and 2, as well as to proving the missing results (see Remark 2 for some examples of the latter).
We will start with a Lemma that can be considered an extension of [1], Lemma 2.3 to the problem (4). As Eloe and Ridenhour pointed out, [1], Lemma 2.3 was in essence proved by Peterson and Ridenhour in [6] for the case $(\alpha_{1}, \ldots, \alpha_{k}) = (0, \ldots, k-1)$.
Lemma 1. 
Let us assume that $L$ is disfocal on $[a,b]$ and that $y(x)$ is a nontrivial solution of $Ly = 0$ which satisfies the $n-1$ homogeneous boundary conditions
$$y^{(\alpha_{i})}(a) = 0, \ \alpha_{i} \in \alpha, \ \alpha \in \Omega_{k-1}; \qquad y^{(\beta_{i})}(b) = 0, \ \beta_{i} \in \beta, \ \beta \in \Omega_{n-k}. \qquad (16)$$
Let us also assume that
$$\operatorname{card}\{i : 0 \le i \le j-1, \ y^{(i)}(a) = 0\} + \operatorname{card}\{i : 0 \le i \le j-1, \ y^{(i)}(b) = 0\} \ge j, \quad j = 1, \ldots, K(\alpha,\beta). \qquad (17)$$
Then $y(x)$ is essentially unique (that is, unique up to a constant factor) and satisfies:
1. Neither $y(x)$ nor any of its derivatives of order up to $K(\alpha,\beta)$ vanish at $a$ or $b$, apart from the derivatives appearing in (16), that is,
$$y^{(i)}(a) \ne 0, \quad i = 0, \ldots, K(\alpha,\beta), \ i \notin \alpha,$$
$$y^{(i)}(b) \ne 0, \quad i = 0, \ldots, K(\alpha,\beta), \ i \notin \beta. \qquad (18)$$
2. $y^{(i)}(x) \ne 0$, $x \in (a,b)$, $0 \le i \le \max(\alpha_{1}, \beta_{1})$. Moreover, if $(\alpha, \beta)$ are $p$-alternate, $y^{(i)}(x) \ne 0$, $x \in (a,b)$, $0 \le i \le p+1$.
3. If $y^{(i)}(a) = 0$, $0 \le i \le K(\alpha,\beta)-1$, there exists an $\epsilon > 0$ such that $y^{(i)}(x)\, y^{(i+1)}(x) > 0$, $x \in (a, a+\epsilon)$.
4. If $y^{(i)}(a) \ne 0$, $0 \le i \le K(\alpha,\beta)-1$, there exists an $\epsilon > 0$ such that $y^{(i)}(x)\, y^{(i+1)}(x) < 0$, $x \in (a, a+\epsilon)$.
5. If $y^{(i)}(b) = 0$, $0 \le i \le K(\alpha,\beta)-1$, there exists an $\epsilon > 0$ such that $y^{(i)}(x)\, y^{(i+1)}(x) < 0$, $x \in (b-\epsilon, b)$.
6. If $y^{(i)}(b) \ne 0$, $0 \le i \le K(\alpha,\beta)-1$, there exists an $\epsilon > 0$ such that $y^{(i)}(x)\, y^{(i+1)}(x) > 0$, $x \in (b-\epsilon, b)$.
Proof. 
Following the argumentation of [6], let us denote by $l_{j}$, $r_{j}$ the values
$$l_{j} = \operatorname{card}\{i : 0 \le i \le j-1, \ y^{(i)}(a) = 0\}, \quad 0 \le j \le n,$$
$$r_{j} = \operatorname{card}\{i : 0 \le i \le j-1, \ y^{(i)}(b) = 0\}, \quad 0 \le j \le n.$$
We will show by induction that the number of zeroes of $y^{(j)}(x)$ in the interval $(a,b)$ (let us name it $z_{j}(a,b)$) is at least $l_{j} + r_{j} - j$. For $j = 0$ it is straightforward, so let us assume that the hypothesis holds for $j-1$, that is,
$$z_{j-1}(a,b) \ge l_{j-1} + r_{j-1} - j + 1.$$
If we consider the possible zeroes of $y^{(j-1)}(x)$ at $a$ or $b$, Rolle's theorem mandates that
$$z_{j}(a,b) \ge z_{j-1}(a,b) + (l_{j} - l_{j-1}) + (r_{j} - r_{j-1}) - 1 \ge l_{j-1} + r_{j-1} - j + 1 + l_{j} - l_{j-1} + r_{j} - r_{j-1} - 1 = l_{j} + r_{j} - j.$$
From the definition of $l_{j}$, $r_{j}$, this result also implies that the number of zeroes of $y^{(j)}(x)$ in $[a,b]$ (let us name it $z_{j}[a,b]$) satisfies
$$z_{j}[a,b] \ge l_{j+1} + r_{j+1} - j.$$
With this in mind it is immediate to see that the condition (17) translates into
$$z_{j}[a,b] \ge 1, \quad j = 0, \ldots, K(\alpha,\beta)-1, \qquad (19)$$
whereas the definition of $K(\alpha,\beta)$ implies
$$z_{j}[a,b] \ge 1, \quad j = K(\alpha,\beta)+1, \ldots, n-1. \qquad (20)$$
The key insight for the rest of the proof is that any additional zero of $y^{(i)}(x)$ on $[a,b]$ for $i = 0, \ldots, K(\alpha,\beta)$, not forced by the homogeneous boundary conditions nor by Rolle's theorem, will imply, again by Rolle's theorem, that $z_{K(\alpha,\beta)}[a,b] \ge 1$, which together with (19) and (20) gives
$$z_{j}[a,b] \ge 1, \quad j = 0, \ldots, n-1.$$
Since $L$ is disfocal on $[a,b]$ by hypothesis, such an additional zero would mean $y \equiv 0$. This proves properties 1 and 2 (the $p$-alternate condition grants that only one homogeneous boundary condition, at either $a$ or $b$, is set on each derivative up to the $p$-th one, so these boundary conditions cannot force, at least via Rolle's theorem, any zeroes in $(a,b)$ in the derivatives up to the $(p+1)$-th one), and also the fact that $y$ is essentially unique up to a constant factor (if there were two linearly independent solutions $y_{1}$ and $y_{2}$, one could create a nontrivial linear combination $y_{3}$ of them with a zero of $y_{3}^{(K(\alpha,\beta))}$ in $[a,b]$).
As for property 3, if $i+1 \le K(\alpha,\beta)$ then the number of zeroes of $y^{(i+1)}(x)$ on $(a,b)$ must be finite (otherwise from Rolle's theorem we would end up with a zero of $y^{(K(\alpha,\beta))}(x)$ on $(a,b)$, and the disfocality of $L$ on $[a,b]$ would force $y \equiv 0$), and there must be an $\epsilon > 0$ such that $y^{(i+1)}(x) \ne 0$ on $(a, a+\epsilon)$. Since
$$y^{(i)}(x) = y^{(i)}(a) + \int_{a}^{x} y^{(i+1)}(s)\,ds = \int_{a}^{x} y^{(i+1)}(s)\,ds,$$
it must follow that $y^{(i)}(x)\, y^{(i+1)}(x) > 0$ on $(a, a+\epsilon)$.
To prove property 4, let $x_{i} \in (a,b]$ be such that $y^{(i)}(x_{i}) = 0$ and $y^{(i)}(x) \ne 0$ on $[a, x_{i})$ (the existence of $x_{i}$ is granted by (19)). There cannot be any zeroes of $y^{(i+1)}(x)$ on $(a, x_{i})$ since, by the previous argumentation, this would imply again a zero of $y^{(K(\alpha,\beta))}(x)$ on $(a,b)$ and therefore $y \equiv 0$. As
$$-y^{(i)}(x) = y^{(i)}(x_{i}) - y^{(i)}(x) = \int_{x}^{x_{i}} y^{(i+1)}(s)\,ds,$$
one gets to $y^{(i)}(x)\, y^{(i+1)}(x) < 0$ on $(a, x_{i})$.
The proof of properties 5 and 6 is similar to that of properties 3 and 4, respectively. □
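A minimal concrete instance of the Lemma (our own illustration): take $n = 3$, $Ly = y'''$, $k = 2$, with the $n-1 = 2$ conditions $y(0) = 0$ and $y(1) = 0$, so that the free orders are $1$ and $2$ and $K(\alpha,\beta) = 2$. Up to a constant factor the nontrivial solution is $y = x(1-x)$, for which properties 1 and 3–6 can be checked directly:

```python
# n = 3, L y = y''', boundary conditions y(0) = 0 and y(1) = 0
# (alpha = {0} in Omega_1, beta = {0}; the free orders are 1, 2, so K = 2).
# Up to a constant factor the nontrivial solution is y = x(1 - x).
y  = lambda x: x * (1.0 - x)
y1 = lambda x: 1.0 - 2.0 * x     # y'
y2 = lambda x: -2.0              # y''

eps = 1e-3
# Property 1: the unforced derivatives do not vanish at the extremes
print(y1(0.0) != 0 and y2(0.0) != 0 and y1(1.0) != 0 and y2(1.0) != 0)  # True
# Property 3 (y(0) = 0) and property 4 (y'(0) != 0) near a = 0
print(y(eps) * y1(eps) > 0 and y1(eps) * y2(eps) < 0)                   # True
# Property 5 (y(1) = 0) and property 6 (y'(1) != 0) near b = 1
print(y(1 - eps) * y1(1 - eps) < 0 and y1(1 - eps) * y2(1 - eps) > 0)   # True
```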
Remark 1. 
It is important to stress that the results 3–6 of the previous Lemma only apply if $i \le K(\alpha,\beta)-1$. If $y^{(K(\alpha,\beta))}(x) \ne 0$ on $[a,b]$ we cannot deduce anything about the zeroes of higher derivatives of $y(x)$ on $[a,b]$, as the chain of zeroes needed to invoke the disfocality of $L$ would already be broken at $y^{(K(\alpha,\beta))}$.
The next Theorem extends [1], Lemma 2.4 and Theorem 2.1 to the problem (4).
Theorem 3. 
Let us assume that the boundary conditions $(\alpha, \beta)$, with $\alpha \in \Omega_{k}$ and $\beta \in \Omega_{n-k}$, are admissible. Then one has
$$(-1)^{m(\alpha,i)}\, \frac{\partial^{i} G(\alpha,\beta,a,t)}{\partial x^{i}} > 0, \quad 0 \le i \le n-1, \ i \notin \alpha, \qquad (21)$$
and
$$(-1)^{n(\beta,i)}\, \frac{\partial^{i} G(\alpha,\beta,b,t)}{\partial x^{i}} > 0, \quad 0 \le i \le n-1, \ i \notin \beta. \qquad (22)$$
In addition:
1. If $\alpha_{1} = 0$ then
$$(-1)^{n-k}\, \frac{\partial^{i} G(\alpha,\beta,x,t)}{\partial x^{i}} > 0, \quad 0 \le i \le \beta_{1}, \ x \in (a,b). \qquad (23)$$
2. If $\beta_{1} = 0$ then
$$(-1)^{n-k+i}\, \frac{\partial^{i} G(\alpha,\beta,x,t)}{\partial x^{i}} > 0, \quad 0 \le i \le \alpha_{1}, \ x \in (a,b). \qquad (24)$$
Proof. 
Let us note first that the admissibility of the boundary conditions imposes that $\alpha_{1} = 0$ or $\beta_{1} = 0$.
We will focus initially on the case $\alpha_{1} = 0$, for which we will follow an approach similar to that used in [1], Lemma 2.4. Thus, as a starting point, let us fix $t \in [a,b]$ and let us consider the boundary conditions $(\alpha, \beta)$ with $\alpha = \{0, \ldots, k-1\}$, which (as is straightforward to show) are always admissible regardless of the value of $k$ and $\beta$. From [1], Lemma 2.4 one has (22), and from [1], Theorem 2.1 one gets (23) and
$$(-1)^{n-k}\, \frac{\partial^{k} G(\alpha,\beta,a,t)}{\partial x^{k}} > 0. \qquad (25)$$
If $k < n-1$, we can pick new boundary conditions $(\alpha', \beta')$ with $\alpha' = \{0, \ldots, k\}$ and $\beta' = \beta \setminus \{\beta_{n-k}\}$ (that is, $\beta' = \{\beta_{1}, \ldots, \beta_{n-k-1}\}$), for which [1], Theorem 2.1 gives again
$$(-1)^{n-k-1}\, \frac{\partial^{k+1} G(\alpha',\beta',a,t)}{\partial x^{k+1}} > 0. \qquad (26)$$
We can build the function $g_{1}(x) = G(\alpha',\beta',x,t) - G(\alpha,\beta,x,t)$, which is $n$ times continuously differentiable (the difference of the Green functions compensates the discontinuity of their $(n-1)$-th partial derivatives with respect to $x$ at $x = t$) and satisfies
$$Lg_{1} = 0, \quad x \in (a,b);$$
$$g_{1}^{(j)}(a) = 0, \ 0 \le j \le k-1; \qquad g_{1}^{(\beta_{j})}(b) = 0, \ \beta_{j} \in \beta \setminus \{\beta_{n-k}\};$$
$$g_{1}^{(k)}(a) = -\frac{\partial^{k} G(\alpha,\beta,a,t)}{\partial x^{k}}; \qquad g_{1}^{(\beta_{n-k})}(b) = \frac{\partial^{\beta_{n-k}} G(\alpha',\beta',b,t)}{\partial x^{\beta_{n-k}}}. \qquad (27)$$
From (25) and (27) it follows
$$(-1)^{n-k}\, g_{1}^{(k)}(a) < 0. \qquad (28)$$
The boundary conditions of $g_{1}$ are $(\alpha, \beta')$. It is straightforward to prove that $K(\alpha, \beta') = n-1$ and that $g_{1}$ satisfies the hypothesis (17) of Lemma 1 for $j = 1, \ldots, n-1$. In consequence, one can apply properties 1 and 4 of Lemma 1 to $g_{1}$ and, taking (28) into account, get to
$$(-1)^{n-k}\, g_{1}^{(k+1)}(a) > 0. \qquad (29)$$
From here and (26) one has
$$(-1)^{n-k-1}\, \frac{\partial^{k+1} G(\alpha,\beta,a,t)}{\partial x^{k+1}} = (-1)^{n-k-1}\, \frac{\partial^{k+1} G(\alpha',\beta',a,t)}{\partial x^{k+1}} - (-1)^{n-k-1}\, g_{1}^{(k+1)}(a) > 0.$$
This argument can be repeated recursively to obtain
$$(-1)^{n-i}\, \frac{\partial^{i} G(\alpha,\beta,a,t)}{\partial x^{i}} > 0, \quad k \le i \le n-1, \qquad (30)$$
which is (21), since $m(\alpha,i) = n-i$ for $i \ge k$ when $\alpha = \{0, \ldots, k-1\}$.
Next, we will proceed by induction over $S(\alpha)$. Thus, let us consider admissible (but not strongly admissible) boundary conditions $(\alpha, \beta)$ with $\alpha \in \Omega_{k}$ and $\beta \in \Omega_{n-k}$, and let us define new conditions $(\alpha', \beta)$ by taking $\alpha$ and replacing the homogeneous boundary condition $\alpha_{i}$ by $\alpha_{k+1}$ (that is, $\alpha'$ specifies $y^{(\alpha_{k+1})}(a) = 0$ instead of $y^{(\alpha_{i})}(a) = 0$). Let us assume that $(\alpha', \beta)$ are also admissible.
The function $g_{2}(x) = G(\alpha',\beta,x,t) - G(\alpha,\beta,x,t)$ is $n$ times continuously differentiable and satisfies
$$Lg_{2} = 0, \quad x \in (a,b);$$
$$g_{2}^{(\alpha_{j})}(a) = 0, \ \alpha_{j} \in \alpha, \ \alpha_{j} \ne \alpha_{i}; \qquad g_{2}^{(\beta_{j})}(b) = 0, \ \beta_{j} \in \beta;$$
$$g_{2}^{(\alpha_{i})}(a) = \frac{\partial^{\alpha_{i}} G(\alpha',\beta,a,t)}{\partial x^{\alpha_{i}}}; \qquad g_{2}^{(\alpha_{k+1})}(a) = -\frac{\partial^{\alpha_{k+1}} G(\alpha,\beta,a,t)}{\partial x^{\alpha_{k+1}}}. \qquad (31)$$
Let $(\alpha'', \beta)$ be the homogeneous boundary conditions satisfied by $g_{2}$, with $\alpha'' \in \Omega_{k-1}$. We will prove now that
$$K(\alpha'',\beta) = \max(K(\alpha,\beta),\, \alpha_{k+1}), \qquad (32)$$
and that $g_{2}$ complies with the hypotheses of Lemma 1 for $K(\alpha'',\beta)$.
If $K(\alpha,\beta) > \alpha_{k+1}$ then $K(\alpha'',\beta) = K(\alpha,\beta)$, as the only difference between $(\alpha'',\beta)$ and $(\alpha',\beta)$ is precisely $\alpha_{k+1}$. In that case
$$\operatorname{card}\{\beta_{l} \in \beta : j < \beta_{l} \le n-1\} \le n-1-j-1 = n-j-2$$
for $\alpha_{k+1} \le j < K(\alpha,\beta)$, since $K(\alpha,\beta) \notin \beta$ as per the definition of $K(\alpha,\beta)$. Following the nomenclature of Lemma 1 and noting that
$$l_{n}(\alpha',\beta) + r_{n}(\alpha',\beta) = n,$$
it follows
$$l_{j+1}(\alpha',\beta) + r_{j+1}(\alpha',\beta) \ge n - (n-j-2) = j+2, \quad \alpha_{k+1} \le j < K(\alpha,\beta),$$
which in turn means
$$l_{j+1}(\alpha'',\beta) + r_{j+1}(\alpha'',\beta) \ge j+1, \quad \alpha_{k+1} \le j < K(\alpha,\beta). \qquad (33)$$
Since
$$l_{j+1}(\alpha'',\beta) + r_{j+1}(\alpha'',\beta) = l_{j+1}(\alpha',\beta) + r_{j+1}(\alpha',\beta) \ge j+1, \quad j < \alpha_{k+1}, \qquad (34)$$
due to the admissibility of $(\alpha',\beta)$, from (33) and (34) it follows that the condition (17) holds for $(\alpha'',\beta)$ and $j = 1, \ldots, K(\alpha'',\beta)$.
On the other hand, if $K(\alpha,\beta) < \alpha_{k+1}$, since $(\alpha,\beta)$ are admissible there cannot be an order above $K(\alpha,\beta)$ which belongs to $\alpha$ and $\beta$ at the same time, which implies that the number of boundary conditions above $K(\alpha,\beta)$ is given by
$$\operatorname{card}\{\alpha_{l} \in \alpha : K(\alpha,\beta) < \alpha_{l} \le n-1\} + \operatorname{card}\{\beta_{l} \in \beta : K(\alpha,\beta) < \beta_{l} \le n-1\} = n-1-K(\alpha,\beta),$$
and therefore
$$\operatorname{card}\{\alpha_{l} \in \alpha'' : K(\alpha,\beta) < \alpha_{l} \le n-1\} + \operatorname{card}\{\beta_{l} \in \beta : K(\alpha,\beta) < \beta_{l} \le n-1\} = n-2-K(\alpha,\beta).$$
This means that there must exist an index $l$ with $K(\alpha,\beta)+1 \le l \le n-1$ such that $l \notin \alpha'' \cup \beta$. That index $l$ must obviously be $K(\alpha'',\beta)$. As the only difference between $(\alpha'',\beta)$ and $(\alpha',\beta)$ is precisely $\alpha_{k+1}$, it follows that $K(\alpha'',\beta) = \alpha_{k+1}$. The admissibility of $(\alpha',\beta)$ grants that $(\alpha'',\beta)$ fulfils the condition (17) of Lemma 1 for $j = 1, \ldots, K(\alpha'',\beta)$, also in this case $K(\alpha'',\beta) = \alpha_{k+1}$.
Moving on, from the induction hypothesis we know that
$$(-1)^{m(\alpha,\alpha_{k+1})}\, \frac{\partial^{\alpha_{k+1}} G(\alpha,\beta,a,t)}{\partial x^{\alpha_{k+1}}} > 0, \qquad (35)$$
which together with (31) gives
$$(-1)^{m(\alpha,\alpha_{k+1})}\, g_{2}^{(\alpha_{k+1})}(a) < 0. \qquad (36)$$
Since the number of derivatives of $g_{2}$ between $g_{2}^{(\alpha_{i})}$ and $g_{2}^{(\alpha_{k+1})}$ which are not specified to be zero at $a$ is $m(\alpha,\alpha_{i}) - m(\alpha,\alpha_{k+1}) + 1$, applying properties 3 and 4 of Lemma 1 to $g_{2}(x)$ one gets
$$(-1)^{m(\alpha,\alpha_{i})}\, g_{2}^{(\alpha_{i})}(a) > 0, \qquad (37)$$
that is,
$$(-1)^{m(\alpha,\alpha_{i})}\, \frac{\partial^{\alpha_{i}} G(\alpha',\beta,a,t)}{\partial x^{\alpha_{i}}} > 0. \qquad (38)$$
In a similar manner, for $x \in (a, a+\epsilon)$,
$$(-1)^{m(\alpha,j)}\, g_{2}^{(j)}(x) > 0, \quad j \le \alpha_{i}, \qquad (39)$$
and since $m(\alpha',j) = m(\alpha,j)$ for $j < \alpha_{i}$, from the induction hypothesis one obtains
$$(-1)^{m(\alpha,j)}\, \frac{\partial^{j} G(\alpha',\beta,a,t)}{\partial x^{j}} = (-1)^{m(\alpha,j)}\, g_{2}^{(j)}(a) + (-1)^{m(\alpha,j)}\, \frac{\partial^{j} G(\alpha,\beta,a,t)}{\partial x^{j}} > 0, \quad j \notin \alpha, \ j < \alpha_{i}. \qquad (40)$$
Equations (38) and (40) prove (21) for $j \le \alpha_{i}$.
Before addressing (21) for $j > \alpha_{i}$, which will require a different function $g_{3}$, let us focus on (23) and (22), in this order. Thus, from (39), the definition of $m(\alpha,j)$ and property 2 of Lemma 1 it follows
$$(-1)^{n-k}\, g_{2}^{(i)}(x) > 0, \quad 0 \le i \le \beta_{1}, \ x \in (a,b). \qquad (41)$$
Since by the induction hypothesis
$$(-1)^{n-k}\, \frac{\partial^{i} G(\alpha,\beta,x,t)}{\partial x^{i}} > 0, \quad 0 \le i \le \beta_{1}, \ x \in (a,b), \qquad (42)$$
from (41) and (42) one gets to
$$(-1)^{n-k}\, \frac{\partial^{i} G(\alpha',\beta,x,t)}{\partial x^{i}} = (-1)^{n-k}\, g_{2}^{(i)}(x) + (-1)^{n-k}\, \frac{\partial^{i} G(\alpha,\beta,x,t)}{\partial x^{i}} > 0, \quad 0 \le i \le \beta_{1}, \ x \in (a,b), \qquad (43)$$
which is (23).
On the other hand, (41) also implies $(-1)^{n-k}\, g_{2}^{(\beta_{1})}(x) = (-1)^{n(\beta,\beta_{1})+1}\, g_{2}^{(\beta_{1})}(x) > 0$ for $x \in (b-\epsilon, b)$. Applying properties 1, 5 and 6 of Lemma 1, one has
$$(-1)^{n(\beta,j)}\, g_{2}^{(j)}(b) > 0, \quad 0 \le j \le K(\alpha'',\beta), \ j \notin \beta. \qquad (44)$$
Since the induction hypothesis on $b$ implies
$$(-1)^{n(\beta,j)}\, \frac{\partial^{j} G(\alpha,\beta,b,t)}{\partial x^{j}} > 0, \quad 0 \le j \le n-1, \ j \notin \beta, \qquad (45)$$
from (44) and (45) we get to
$$(-1)^{n(\beta,j)}\, \frac{\partial^{j} G(\alpha',\beta,b,t)}{\partial x^{j}} > 0, \quad 0 \le j \le K(\alpha'',\beta), \ j \notin \beta, \qquad (46)$$
or rather
$$(-1)^{n(\beta,j)}\, \frac{\partial^{j} G(\alpha',\beta,b,t)}{\partial x^{j}} > 0, \quad 0 \le j \le \max(K(\alpha,\beta),\, \alpha_{k+1}), \ j \notin \beta, \qquad (47)$$
if we consider (32). The extension of (47) to (22) is straightforward, since if $\max(K(\alpha,\beta),\, \alpha_{k+1}) < n-1$ then $\{\max(K(\alpha,\beta),\, \alpha_{k+1})+1, \ldots, n-1\} \subseteq \beta$.
Let us move on to prove (21) for $j > \alpha_{i}$. For that, let us consider the boundary conditions $(\hat{\alpha}, \hat{\beta})$, defined by $\hat{\alpha} = \alpha' \cup \{\alpha_{i}\}$ (or, in another way, $\hat{\alpha} = \alpha \cup \{\alpha_{k+1}\}$), $\hat{\alpha} \in \Omega_{k+1}$, and $\hat{\beta} = \beta \setminus \{\beta_{n-k}\}$, $\hat{\beta} \in \Omega_{n-k-1}$. $(\hat{\alpha}, \hat{\beta})$ are admissible since:
  • If $\beta_{n-k} \ge \alpha_{i}$, the property is straightforward as $(\alpha, \beta)$ are also admissible.
  • If $\beta_{n-k} < \alpha_{i}$, then (reusing the nomenclature of Lemma 1) one has $l_{j+1}(\hat{\alpha},\hat{\beta}) + r_{j+1}(\hat{\alpha},\hat{\beta}) = n$ for $j = \alpha_{k+1}, \ldots, n-1$, and in particular $l_{\alpha_{k+1}+1}(\hat{\alpha},\hat{\beta}) + r_{\alpha_{k+1}+1}(\hat{\alpha},\hat{\beta}) = n$, which in turn implies (note that $\alpha_{k+1} < n$)
$$l_{j+1}(\hat{\alpha},\hat{\beta}) + r_{j+1}(\hat{\alpha},\hat{\beta}) \ge n - (\alpha_{k+1} - j) = n - \alpha_{k+1} + j \ge j+1,$$
    for $\beta_{n-k} \le j \le \alpha_{k+1}$. As there is no change in the boundary conditions associated to derivatives of order lower than $\beta_{n-k}$, this proves the admissibility of $(\hat{\alpha}, \hat{\beta})$.
Thus, let us define the function $g_{3}(x) = G(\alpha',\beta,x,t) - G(\hat{\alpha},\hat{\beta},x,t)$, which is $n$ times continuously differentiable and satisfies
$$Lg_{3} = 0, \quad x \in (a,b);$$
$$g_{3}^{(\alpha_{j})}(a) = 0, \ \alpha_{j} \in \hat{\alpha}, \ \alpha_{j} \ne \alpha_{i}; \qquad g_{3}^{(\beta_{j})}(b) = 0, \ \beta_{j} \in \beta \setminus \{\beta_{n-k}\};$$
$$g_{3}^{(\alpha_{i})}(a) = \frac{\partial^{\alpha_{i}} G(\alpha',\beta,a,t)}{\partial x^{\alpha_{i}}}; \qquad g_{3}^{(\beta_{n-k})}(b) = -\frac{\partial^{\beta_{n-k}} G(\hat{\alpha},\hat{\beta},b,t)}{\partial x^{\beta_{n-k}}}. \qquad (48)$$
From (38) and (48) it follows
$$(-1)^{m(\alpha,\alpha_{i})}\, g_{3}^{(\alpha_{i})}(a) > 0. \qquad (49)$$
The boundary conditions of $g_{3}$ are $(\alpha', \hat{\beta})$. We will prove now that
$$K(\alpha',\hat{\beta}) = \max(K(\alpha,\beta),\, \beta_{n-k}), \qquad (50)$$
and that $(\alpha', \hat{\beta})$ satisfy the condition (17) of Lemma 1 for $j = 1, \ldots, K(\alpha',\hat{\beta})$.
If $K(\alpha,\beta) > \beta_{n-k}$ then $K(\alpha',\hat{\beta}) = K(\alpha,\beta)$, as the only difference between $(\alpha',\hat{\beta})$ and $(\alpha',\beta)$ is precisely $\beta_{n-k}$. In that case we can follow a similar reasoning as before to state
$$\operatorname{card}\{\alpha_{l} \in \alpha' : j < \alpha_{l} \le n-1\} \le n-1-j-1 = n-j-2,$$
for $\beta_{n-k} \le j < K(\alpha,\beta)$, so, using again the nomenclature of Lemma 1 for $(\alpha',\beta)$,
$$l_{j+1}(\alpha',\beta) + r_{j+1}(\alpha',\beta) \ge n - (n-j-2) = j+2, \quad \beta_{n-k} \le j < K(\alpha,\beta) = K(\alpha',\hat{\beta}).$$
That in turn implies
$$l_{j+1}(\alpha',\hat{\beta}) + r_{j+1}(\alpha',\hat{\beta}) \ge j+1, \quad \beta_{n-k} \le j < K(\alpha',\hat{\beta}),$$
or
$$l_{j}(\alpha',\hat{\beta}) + r_{j}(\alpha',\hat{\beta}) \ge j, \quad \beta_{n-k}+1 \le j \le K(\alpha',\hat{\beta}). \qquad (51)$$
Since
$$l_{j}(\alpha',\hat{\beta}) + r_{j}(\alpha',\hat{\beta}) = l_{j}(\alpha',\beta) + r_{j}(\alpha',\beta) \ge j, \quad j \le \beta_{n-k}, \qquad (52)$$
from (51) and (52) it follows that $(\alpha',\hat{\beta})$ satisfy the condition (17) for $j = 1, \ldots, K(\alpha',\hat{\beta})$ when $K(\alpha',\hat{\beta}) = K(\alpha,\beta)$.
On the other hand, if $K(\alpha,\beta) < \beta_{n-k}$, since $(\alpha,\beta)$ are admissible there cannot be an order above $K(\alpha,\beta)$ which belongs to $\alpha$ and $\beta$ at the same time, which implies that the number of boundary conditions above $K(\alpha,\beta)$ is given by
$$\operatorname{card}\{\alpha_{l} \in \alpha' : K(\alpha,\beta)+1 \le \alpha_{l} \le n-1\} + \operatorname{card}\{\beta_{l} \in \beta : K(\alpha,\beta)+1 \le \beta_{l} \le n-1\} = n-1-K(\alpha,\beta),$$
and therefore
$$\operatorname{card}\{\alpha_{l} \in \alpha' : K(\alpha,\beta)+1 \le \alpha_{l} \le n-1\} + \operatorname{card}\{\beta_{l} \in \hat{\beta} : K(\alpha,\beta)+1 \le \beta_{l} \le n-1\} = n-2-K(\alpha,\beta).$$
This means that there must exist an index $l$ with $K(\alpha,\beta)+1 \le l \le n-1$ such that $l \notin \alpha' \cup \hat{\beta}$. That index $l$ must obviously be $K(\alpha',\hat{\beta})$. As the only difference between $(\alpha',\beta)$ and $(\alpha',\hat{\beta})$ is precisely $\beta_{n-k}$, it follows that $K(\alpha',\hat{\beta}) = \beta_{n-k}$. The admissibility of $(\alpha',\beta)$ grants that $(\alpha',\hat{\beta})$ fulfils the condition (17) of Lemma 1 for $j = 1, \ldots, K(\alpha',\hat{\beta})$, also in this case $K(\alpha',\hat{\beta}) = \beta_{n-k}$.
Since $K(\alpha',\hat{\beta}) \ge \beta_{n-k}$, in all cases where $K(\alpha',\hat{\beta}) \le \alpha_{i}$ we have $\frac{\partial^{j} G(\alpha',\beta,a,t)}{\partial x^{j}} = 0$ for $j = \alpha_{i}+1, \ldots, n-1$, eliminating the need of proving (21) in these scenarios. In the rest of the cases we can apply properties 3 and 4 of Lemma 1 to $g_{3}$ and (49) to yield
$$(-1)^{m(\alpha',j)}\, g_{3}^{(j)}(a) > 0, \quad \alpha_{i} < j \le K(\alpha',\hat{\beta}), \ j \notin \alpha'. \qquad (53)$$
Due to the definition of $\hat{\beta}$, we can apply in this case induction over $S(\beta)$ and assume
$$(-1)^{m(\hat{\alpha},j)}\, \frac{\partial^{j} G(\hat{\alpha},\hat{\beta},a,t)}{\partial x^{j}} > 0, \quad \alpha_{i} < j \le n-1, \ j \notin \hat{\alpha}. \qquad (54)$$
From (53) and (54), and the fact that $m(\hat{\alpha},j) = m(\alpha',j)$ for $\alpha_{i} < j \le n-1$, one finally gets to
$$(-1)^{m(\alpha',j)}\, \frac{\partial^{j} G(\alpha',\beta,a,t)}{\partial x^{j}} = (-1)^{m(\alpha',j)}\, g_{3}^{(j)}(a) + (-1)^{m(\alpha',j)}\, \frac{\partial^{j} G(\hat{\alpha},\hat{\beta},a,t)}{\partial x^{j}}$$
$$= (-1)^{m(\alpha',j)}\, g_{3}^{(j)}(a) + (-1)^{m(\hat{\alpha},j)}\, \frac{\partial^{j} G(\hat{\alpha},\hat{\beta},a,t)}{\partial x^{j}} > 0, \quad \alpha_{i} < j \le K(\alpha',\hat{\beta}), \ j \notin \alpha', \qquad (55)$$
or, taking (50) into account,
$$(-1)^{m(\alpha',j)}\, \frac{\partial^{j} G(\alpha',\beta,a,t)}{\partial x^{j}} > 0, \quad \alpha_{i} < j \le \max(K(\alpha,\beta),\, \beta_{n-k}), \ j \notin \alpha'. \qquad (56)$$
The extension of (56) to (21) is straightforward, as if $\max(K(\alpha,\beta),\, \beta_{n-k}) < n-1$ then $\frac{\partial^{j} G(\alpha',\beta,a,t)}{\partial x^{j}} = 0$ for $j = \max(K(\alpha,\beta),\, \beta_{n-k})+1, \ldots, n-1$. This completes the proof of the case $\alpha_{1} = 0$.
Let us focus now on the case $\alpha_{1} > 0$, $\beta_{1} = 0$. For that we will consider the function
$$\bar{G}(\beta, \alpha, x, t) = (-1)^{n}\, G(\alpha, \beta, b+a-x, b+a-t), \qquad (57)$$
which, as one can readily show (see e.g., [8], Chapter 3, page 105), is the Green function of the problem
$$\bar{L}\bar{G} = 0, \quad (x,t) \in \{(a,t) \cup (t,b)\} \times (a,b),$$
$$\frac{\partial^{\beta_{j}} \bar{G}(\beta,\alpha,a,t)}{\partial x^{\beta_{j}}} = 0, \ j = 1, \ldots, n-k; \qquad \frac{\partial^{\alpha_{j}} \bar{G}(\beta,\alpha,b,t)}{\partial x^{\alpha_{j}}} = 0, \ j = 1, \ldots, k; \qquad (58)$$
with $\bar{L}$ defined as
$$\bar{L}y = y^{(n)}(x) - a_{n-1}(b+a-x)\, y^{(n-1)}(x) + \cdots + (-1)^{n} a_{0}(b+a-x)\, y(x). \qquad (59)$$
Since $\beta_{1} = 0$ is now a boundary condition applied at $a$, $\bar{G}$ satisfies the hypotheses of the first part of this theorem. Thus, from (21), (22) and (23) one gets to
$$(-1)^{m(\beta,i)}\, \frac{\partial^{i} \bar{G}(\beta,\alpha,a,t)}{\partial x^{i}} > 0, \quad 0 \le i \le n-1, \ i \notin \beta, \qquad (60)$$
$$(-1)^{n(\alpha,i)}\, \frac{\partial^{i} \bar{G}(\beta,\alpha,b,t)}{\partial x^{i}} > 0, \quad 0 \le i \le n-1, \ i \notin \alpha, \qquad (61)$$
and
$$(-1)^{k}\, \frac{\partial^{i} \bar{G}(\beta,\alpha,x,t)}{\partial x^{i}} > 0, \quad 0 \le i \le \alpha_{1}, \ x \in (a,b). \qquad (62)$$
Then (60), (61), (62) and the relationship
$$(-1)^{n-j}\, \frac{\partial^{j} \bar{G}(\beta,\alpha,b+a-x,b+a-t)}{\partial x^{j}} = \frac{\partial^{j} G(\alpha,\beta,x,t)}{\partial x^{j}}, \quad 0 \le j \le n-1, \qquad (63)$$
finally yield
$$(-1)^{n(\alpha,i)+n-i}\, \frac{\partial^{i} G(\alpha,\beta,a,t)}{\partial x^{i}} > 0, \quad 0 \le i \le n-1, \ i \notin \alpha, \qquad (64)$$
$$(-1)^{m(\beta,i)+n-i}\, \frac{\partial^{i} G(\alpha,\beta,b,t)}{\partial x^{i}} > 0, \quad 0 \le i \le n-1, \ i \notin \beta, \qquad (65)$$
$$(-1)^{n-k+i}\, \frac{\partial^{i} G(\alpha,\beta,x,t)}{\partial x^{i}} > 0, \quad 0 \le i \le \alpha_{1}, \ x \in (a,b), \qquad (66)$$
the last of which is (24). As $n-i-n(\alpha,i) = m(\alpha,i)$ for $i \notin \alpha$ and $n-i-m(\beta,i) = n(\beta,i)$ for $i \notin \beta$, from (64) and (65) one readily gets (21) and (22), respectively. □
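The reflection used in the last part of the proof can be checked on a concrete low-order case (our own illustration): for $n = 2$, $Ly = y''$ on $(0,1)$, the right focal Green function ($\alpha = \{0\}$, $\beta = \{1\}$) is $G(x,t) = -\min(x,t)$, and the reflected function $(-1)^{n} G(b+a-x, b+a-t) = \max(x,t) - 1$ is precisely the Green function of the left focal problem $y'(0) = 0$, $y(1) = 0$:

```python
import numpy as np

a, b = 0.0, 1.0
# Right focal Green function of y'' = f, y(0) = 0, y'(1) = 0
G_rf = lambda x, t: -np.minimum(x, t)
# Reflected function (-1)^n G(b + a - x, b + a - t) with n = 2
G_ref = lambda x, t: G_rf(b + a - x, b + a - t)

t = np.linspace(a, b, 2001)
w = np.full(t.size, t[1] - t[0])
w[0] = w[-1] = (t[1] - t[0]) / 2.0    # trapezoidal weights

x = 0.4
# The reflected kernel coincides with the left focal Green function max(x,t) - 1 ...
print(bool(np.allclose(G_ref(x, t), np.maximum(x, t) - 1.0)))   # True
# ... so y(x) = int_0^1 G_ref(x,t) dt must solve y'' = 1, y'(0) = 0, y(1) = 0,
# whose exact solution is y = (x**2 - 1)/2.
y = (G_ref(x, t) * w).sum()
print(abs(y - (x * x - 1.0) / 2.0) < 1e-9)                      # True
```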
Remark 2. 
Inequalities (21) and (22) are new results with respect to Theorems 3.3 and 3.4 of [1]. Likewise, (30) is also new with respect to Theorem 2.1 of [1].
Next, we will assess the dependence of $G(x,t)$ and of some of its partial derivatives on the extremes $a$ and $b$.
Lemma 2. 
For fixed $t \in [a,b]$, $H(x,t)$ is the solution of the problem
$$LH = 0, \quad x \in (a,b);$$
$$\frac{\partial^{\alpha_{j}} H(a,t)}{\partial x^{\alpha_{j}}} = 0, \ \alpha_{j} \in \alpha; \qquad \frac{\partial^{\beta_{j}} H(b,t)}{\partial x^{\beta_{j}}} = -\frac{\partial^{\beta_{j}+1} G(\alpha,\beta,b,t)}{\partial x^{\beta_{j}+1}}, \ \beta_{j} \in \beta. \qquad (67)$$
Likewise, $I(x,t)$ is the solution of the problem
$$LI = 0, \quad x \in (a,b);$$
$$\frac{\partial^{\alpha_{j}} I(a,t)}{\partial x^{\alpha_{j}}} = -\frac{\partial^{\alpha_{j}+1} G(\alpha,\beta,a,t)}{\partial x^{\alpha_{j}+1}}, \ \alpha_{j} \in \alpha; \qquad \frac{\partial^{\beta_{j}} I(b,t)}{\partial x^{\beta_{j}}} = 0, \ \beta_{j} \in \beta. \qquad (68)$$
Proof. 
The proof of (67) follows the same steps as that of [13], Lemma 3.3, with $x_{1} = a$ and $k = 1$, and will not be repeated here. The proof of (68) is also similar. □
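Lemma 2 can be sanity-checked in the conjugate case $n = 2$, where everything is explicit (our own illustration): $G_{a,b}(x,t)$ is known in closed form, $H = \partial G/\partial b$ can be approximated by a finite difference in $b$, and it must agree with the solution of problem (67), namely the function, linear in $x$, with $H(a,t) = 0$ and $H(b,t) = -\partial G(b,t)/\partial x$:

```python
import numpy as np

def G(x, t, a, b):
    # Conjugate Green function of y'' = f, y(a) = y(b) = 0
    return np.where(t <= x, (x - b) * (t - a), (x - a) * (t - b)) / (b - a)

a, b, x, t, db = 0.0, 1.0, 0.6, 0.3, 1e-6
# Finite-difference approximation of H(x,t) = dG(x,t)/db ...
H_fd = (G(x, t, a, b + db) - G(x, t, a, b - db)) / (2.0 * db)
# ... against the solution of problem (67): H'' = 0 in x with
# H(a,t) = 0 and H(b,t) = -dG(b,t)/dx = -(t - a)/(b - a), that is,
# H(x,t) = -(t - a)(x - a)/(b - a)**2.
H_exact = -(t - a) * (x - a) / (b - a) ** 2
print(abs(H_fd - H_exact) < 1e-8)   # True
print(H_fd < 0.0)                   # True: |G| grows as b moves away from a
```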
Theorem 4. 
Let us assume that $(\alpha, \beta)$ are admissible boundary conditions. If $\alpha_{1} = 0$ and either
$$\beta_{n-k} < n-1$$
or
$$\beta_{n-k} = n-1 \quad \text{and} \quad (-1)^{n(\beta,j)}\, a_{j}(b) \le 0, \ 0 \le j \le n-1, \ j \notin \beta,$$
with at least one $l \notin \beta$ such that $0 \le l \le n-1$ and
$$(-1)^{n(\beta,l)}\, a_{l}(b) < 0,$$
then
$$(-1)^{n(\beta,j)}\, \frac{\partial^{j} H(x,t)}{\partial x^{j}} < 0, \quad (x,t) \in (a,b) \times (a,b), \ \beta_{1} \le j \le \beta_{B}, \qquad (71)$$
and
$$(-1)^{n-k}\, \frac{\partial^{j} H(x,t)}{\partial x^{j}} > 0, \quad (x,t) \in (a,b) \times (a,b), \ 0 \le j < \beta_{1}. \qquad (72)$$
If $\alpha_{1} > 0$ and either
$$\beta_{n-k} < n-1$$
or
$$\beta_{n-k} = n-1 \quad \text{and} \quad (-1)^{n(\beta,j)}\, a_{j}(b) \le 0, \ 0 \le j \le n-1, \ j \notin \beta,$$
then
$$(-1)^{n-k-j}\, \frac{\partial^{j} H(x,t)}{\partial x^{j}} > 0, \quad (x,t) \in (a,b) \times (a,b), \ 0 \le j \le \beta_{B}.$$
Proof. 
Let us suppose that α_1 = 0. For fixed t ∈ [a,b], from Lemma 2 we know that H(x,t) = Σ_{i=1}^{n−k} h_{β_i}(x,t), where h_{β_i}(x,t) is the solution of
L h_{β_i} = 0,  x ∈ (a,b);   ∂^{α_j} h_{β_i}(a,t)/∂x^{α_j} = 0,  α_j ∈ α;
∂^{β_j} h_{β_i}(b,t)/∂x^{β_j} = 0,  β_j ∈ β∖{β_i};   ∂^{β_i} h_{β_i}(b,t)/∂x^{β_i} = −∂^{β_i+1} G(b,t)/∂x^{β_i+1}.
Note that if β_i + 1 ∈ β then h_{β_i}(x,t) ≡ 0, due to the disfocality of L on [a,b]. That implies that we only need to take into account those β_i such that β_i + 1 ∉ β.
If β_{n−k} < n−1 then β_i < n−1 for 1 ≤ i ≤ n−k, and we can apply (22) and (75) to obtain
(−1)^{n(β,β_i+1)} ∂^{β_i} h_{β_i}(b,t)/∂x^{β_i} < 0,
which, combined with Properties 2 (as commented at the end of the Introduction, the homogeneous boundary conditions in (75) are at least (β_B−1)-alternate), 5 and 6 of Lemma 1, and the fact that n(β,β_i+1) = n(β,β_i) when β_i + 1 ∉ β, yields
(−1)^{n(β,j)} ∂^j h_{β_i}(x,t)/∂x^j < 0,  (x,t) ∈ (a,b)×(a,b),  j ∈ β,  j ≤ β_B,
and
(−1)^{n(β,j)} ∂^j h_{β_i}(x,t)/∂x^j > 0,  (x,t) ∈ (a,b)×(a,b),  j ∉ β,  j ≤ β_B.
Since ∂^j h_{β_i}(b,t)/∂x^j = 0 for β_1 ≤ j ≤ β_B and ∂^j h_{β_i}(b,t)/∂x^j ≠ 0 for 0 ≤ j < β_1, from (77) and (78), the facts that β_B ≥ β_i and n(β,j) = n−k for j < β_1, and the decomposition of H(x,t) in terms of the h_{β_i}(x,t), one gets (71) and (72).
On the contrary, if β_{n−k} = n−1 then (77) and (78) hold for all h_{β_i} except for h_{β_{n−k}}, since in that case the sign of ∂^{β_{n−k}} h_{β_{n−k}}(b,t)/∂x^{β_{n−k}} is the opposite of that of ∂^n G(b,t)/∂x^n, which Theorem 3 does not yield. In that case we need to revert to the definition of L. Thus, from (1) and (4) one has
∂^n G(b,t)/∂x^n = −Σ_{l=0}^{n−1} a_l(b) ∂^l G(b,t)/∂x^l = −Σ_{l=0, l∉β}^{n−1} a_l(b) ∂^l G(b,t)/∂x^l.
From (22), (69), (70), (75) and (79) one gets ∂^n G(b,t)/∂x^n > 0 and ∂^{n−1} h_{β_{n−k}}(b,t)/∂x^{n−1} < 0. Applying Properties 2, 5 and 6 of Lemma 1 one obtains again (77) and (78), and taking into account the decomposition of H(x,t) in terms of the h_{β_i}(x,t) one finally gets (71) and (72).
The proof of (74) in the case α_1 > 0 can be done following the same reasoning. □
Remark 3. 
Condition (70) can be removed if β ≠ {k, k+1, …, n−1}. Such a condition is needed in the case β = {k, k+1, …, n−1} to guarantee ∂^{n−1} h_{β_{n−k}}(b,t)/∂x^{n−1} < 0, since ∂^{n−1} h_{β_{n−k}}(b,t)/∂x^{n−1} = 0 would imply H(x,t) = h_{β_{n−k}}(x,t) ≡ 0 by the disfocality of L on [a,b]. However, if β ≠ {k, k+1, …, n−1}, then there are other non-trivial h_{β_i}(x,t) which guarantee the non-triviality of H(x,t).
Corollary 1. 
Let b_1 < b_2. Under the conditions of Theorem 4, if α_1 = 0 then
(−1)^{n(β,j)} ∂^j G_{a,b_2}(x,t)/∂x^j < (−1)^{n(β,j)} ∂^j G_{a,b_1}(x,t)/∂x^j,  (x,t) ∈ (a,b_1)×(a,b_1),  β_1 ≤ j ≤ β_B,
and
(−1)^{n−k} ∂^j G_{a,b_2}(x,t)/∂x^j > (−1)^{n−k} ∂^j G_{a,b_1}(x,t)/∂x^j > 0,  (x,t) ∈ (a,b_1)×(a,b_1),  0 ≤ j < β_1.
If α_1 > 0 then
(−1)^{n−k−j} ∂^j G_{a,b_2}(x,t)/∂x^j > (−1)^{n−k−j} ∂^j G_{a,b_1}(x,t)/∂x^j > 0,  (x,t) ∈ (a,b_1)×(a,b_1),  0 ≤ j ≤ β_B.
Proof. 
The proof is immediate from Theorem 4. □
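A minimal numerical illustration (ours) of this monotonicity in the right extreme, again on the toy problem L y = y″ with conjugate conditions, where G is known in closed form:

```python
# Toy check (ours) of the monotonicity in b: for L y = y'' with
# y(a) = y(b) = 0, G_{a,b}(x,t) = (x-a)(t-b)/(b-a) for x <= t.
# Enlarging the right extreme b makes |G| larger pointwise:
# -G_{a,b2} > -G_{a,b1} > 0 (here n - k = 1 and j = 0).
def G(x, t, a, b):
    return (x - a) * (t - b) / (b - a) if x <= t else (t - a) * (x - b) / (b - a)

a = 0.0
for x, t in [(0.3, 0.5), (0.7, 0.2)]:
    g1, g2 = G(x, t, a, 1.0), G(x, t, a, 2.0)   # b1 = 1 < b2 = 2
    assert g2 < g1 < 0
```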
Theorem 5. 
Let us assume that (α,β) are admissible boundary conditions.
If α_1 = 0 and either
α_k < n−1
or
α_k = n−1  and  (−1)^{m(α,j)} a_j(a) ≤ 0,  0 ≤ j ≤ n−1,  j ∉ α,
then
(−1)^{n−k} ∂^j I(x,t)/∂x^j < 0,  (x,t) ∈ (a,b)×(a,b),  0 ≤ j ≤ α_A.
If α_1 > 0 and either
α_k < n−1
or
α_k = n−1  and  (−1)^{m(α,j)} a_j(a) ≤ 0,  0 ≤ j ≤ n−1,  j ∉ α,
with at least one l ∉ α such that 0 ≤ l ≤ n−1 and
(−1)^{m(α,l)} a_l(a) < 0,
then
(−1)^{m(α,j)} ∂^j I(x,t)/∂x^j < 0,  (x,t) ∈ (a,b)×(a,b),  α_1 ≤ j ≤ α_A,
and
(−1)^{n−k−j} ∂^j I(x,t)/∂x^j < 0,  (x,t) ∈ (a,b)×(a,b),  0 ≤ j < α_1.
Proof. 
The proof is similar to that of Theorem 4. □
Remark 4. 
As before, condition (86) can be removed if α ≠ {n−k, n−k+1, …, n−1}.
Corollary 2. 
Let a_2 < a_1. Under the conditions of Theorem 5, if α_1 = 0 then
(−1)^{n−k} ∂^j G_{a_2,b}(x,t)/∂x^j > (−1)^{n−k} ∂^j G_{a_1,b}(x,t)/∂x^j > 0,  (x,t) ∈ (a_1,b)×(a_1,b),  0 ≤ j ≤ α_A.
If α_1 > 0 then
(−1)^{m(α,j)} ∂^j G_{a_2,b}(x,t)/∂x^j > (−1)^{m(α,j)} ∂^j G_{a_1,b}(x,t)/∂x^j,  (x,t) ∈ (a_1,b)×(a_1,b),  α_1 ≤ j ≤ α_A,
and
(−1)^{n−k−j} ∂^j G_{a_2,b}(x,t)/∂x^j > (−1)^{n−k−j} ∂^j G_{a_1,b}(x,t)/∂x^j > 0,  (x,t) ∈ (a_1,b)×(a_1,b),  0 ≤ j < α_1.
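The companion monotonicity in the left extreme can be illustrated the same way (our toy example: n = 2, conjugate conditions, so α_1 = 0, n − k = 1 and j = 0):

```python
# Companion check (ours) for the left extreme: with the same toy problem
# L y = y'', y(a) = y(b) = 0, moving a to the left (a2 < a1) also
# increases |G| pointwise on (a1, b) x (a1, b):
# -G_{a2,b} > -G_{a1,b} > 0.
def G(x, t, a, b):
    return (x - a) * (t - b) / (b - a) if x <= t else (t - a) * (x - b) / (b - a)

b = 1.0
for x, t in [(0.3, 0.5), (0.7, 0.2)]:
    g1, g2 = G(x, t, 0.0, b), G(x, t, -1.0, b)   # a2 = -1 < a1 = 0
    assert g2 < g1 < 0
```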
Remark 5. 
If α_1 = 0, it can happen that α_A < β_1 (more concretely, α_A = β_1 − 1). In that case statement (i) of Theorem 3.4 of [1] (see (14)) does not seem to be valid for l = β_1 and b_1 = b_2, unless an approach not based on the sign of I and its derivatives was used by the authors to prove that assertion. The lack of an explicit proof of that theorem complicates any further analysis, but one cannot help having the impression that the statement is incorrect. The same comment applies to statement (ii) of Theorem 3.4 of [1] in the case α_1 > 0, a_1 = a_2 (see (15)), which seems valid only for l = 0, …, β_B and not for l = α_1 when β_B < α_1.

2.2. The Case of p-Alternate Boundary Conditions

When the boundary conditions are p-alternate, the absence of simultaneous boundary conditions at a and b for any derivative of order lower than p suggests that there is no need for the immediately higher derivative to change sign on (a,b), at least as a consequence of Rolle's theorem. The following theorem shows that, under certain hypotheses, this is to some extent the case.
Theorem 6. 
Let us assume that (α,β) are p-alternate admissible boundary conditions.
If α_1 = 0 and either
β_{n−k} < n−1
or
β_{n−k} = n−1  and  (−1)^{n(β,j)} a_j(x) ≤ 0,  x ∈ [a,b],  0 ≤ j ≤ n−1,  j ∉ β,
with at least one l ∉ β such that 0 ≤ l ≤ n−1 and
(−1)^{n(β,l)} a_l(x) < 0,  x ∈ [a,b],
then
(−1)^{n−k} ∂^j G(α,β,x,t)/∂x^j > 0,  (x,t) ∈ (a,b)×(a,b),  0 ≤ j ≤ β_1,
(−1)^{n(β,j)} ∂^j G(α,β,x,t)/∂x^j < 0,  (x,t) ∈ (a,b)×(a,b),  β_1 ≤ j ≤ β_B,
and, if β_{n−k} > β_B and p > β_B,
(−1)^{n(β,β_B+1)} ∂^{β_B+1} G(α,β,x,t)/∂x^{β_B+1} > 0,  (x,t) ∈ (a,b)×(a,b).
If α_1 > 0 and either
α_k < n−1
or
α_k = n−1  and  (−1)^{m(α,j)} a_j(x) ≤ 0,  x ∈ [a,b],  0 ≤ j ≤ n−1,  j ∉ α,
with at least one l ∉ α such that 0 ≤ l ≤ n−1 and
(−1)^{m(α,l)} a_l(x) < 0,  x ∈ [a,b],
then
(−1)^{n+k−j} ∂^j G(α,β,x,t)/∂x^j > 0,  (x,t) ∈ (a,b)×(a,b),  0 ≤ j ≤ α_1,
(−1)^{m(α,j)} ∂^j G(α,β,x,t)/∂x^j > 0,  (x,t) ∈ (a,b)×(a,b),  α_1 ≤ j ≤ α_A,
and, if α_k > α_A and p > α_A,
(−1)^{m(α,α_A+1)} ∂^{α_A+1} G(α,β,x,t)/∂x^{α_A+1} > 0,  (x,t) ∈ (a,b)×(a,b).
Proof. 
Let us tackle the case α_1 = 0 first. From Theorem 3, concretely (23), we already know that (94) holds for 0 ≤ j ≤ β_1 (note that n(β,β_1) = n−k−1).
Next, let us assume that x > t. From the definition of H one has
∂^j G_{a,b}(α,β,x,t)/∂x^j = ∂^j G_{a,x}(α,β,x,t)/∂x^j + ∫_x^b (∂/∂s) ∂^j G_{a,s}(α,β,x,t)/∂x^j ds
= ∂^j G_{a,x}(α,β,x,t)/∂x^j + ∫_x^b ∂^j H_{a,s}(α,β,x,t)/∂x^j ds,  (x,t) ∈ (a,b)×(a,b).
G_{a,x}(α,β,x,t) is the Green function of the problem (4) when b = x, so it satisfies the boundary conditions related to β at x, that is,
∂^j G_{a,x}(α,β,x,t)/∂x^j = 0,  t ∈ (a,x),  j ∈ β.
On the other hand, from the hypotheses and Theorem 4 it follows that
(−1)^{n(β,j)} ∂^j H_{a,s}(α,β,x,t)/∂x^j < 0,  (x,t) ∈ (a,s)×(a,s),  t < x ≤ s ≤ b,  β_1 ≤ j ≤ β_B.
From (102), (103) and (104) one finally gets (95) for x > t and β_1 ≤ j ≤ β_B.
Let us focus now on the case x ≤ t. As before, one has
∂^j G_{a,b}(α,β,x,t)/∂x^j = ∂^j G_{a,t}(α,β,x,t)/∂x^j + ∫_t^b ∂^j H_{a,s}(α,β,x,t)/∂x^j ds,  (x,t) ∈ (a,b)×(a,b).
G_{a,t}(α,β,x,t) is the Green function of the problem (4) when b = t, so it satisfies the boundary conditions related to β at t, that is,
∂^j G_{a,t}(α,β,t,t)/∂x^j = 0,  t ∈ (a,b),  j ∈ β.
If n−1 ∉ β, G_{a,t}(α,β,x,t) is n-times continuously differentiable in (a,t), satisfies L G_{a,t}(α,β,x,t) = 0 for x ∈ (a,t) and n homogeneous boundary conditions at a and t. Since L is disfocal on [a,b], it is also disfocal on [a,t), and therefore G_{a,t}(α,β,x,t) ≡ 0 for x ∈ [a,t). From here, (104) and (105) one gets (95). On the contrary, if n−1 ∈ β, from the properties of the Green function (see [8], Chapter 3, page 105, property (ii)) it is straightforward to show that G_{a,t}(α,β,x,t) is n-times continuously differentiable on (a,t), satisfies L G_{a,t}(α,β,x,t) = 0 for x ∈ (a,t), n−1 homogeneous boundary conditions at a and t, and the boundary condition
lim_{x→t−} ∂^{n−1} G_{a,t}(α,β,x,t)/∂x^{n−1} = −1,  t ∈ (a,b).
As noted in the Introduction, p ≥ β_B − 1. We can apply Properties 2, 5 and 6 of Lemma 1 to (107), as well as the definition of n(β,j), to obtain
(−1)^{n(β,j)} ∂^j G_{a,t}(α,β,x,t)/∂x^j < 0,  x ∈ (a,t),  t ∈ (a,b),  j ∈ β,  j ≤ p+1,
and
(−1)^{n(β,j)} ∂^j G_{a,t}(α,β,x,t)/∂x^j > 0,  x ∈ (a,t),  t ∈ (a,b),  j ∉ β,  j ≤ p+1.
From (104), (105) and (108) one gets (95) for the case x ≤ t.
To address (96), let us note that if both β_{n−k} > β_B and p > β_B, then β_B ∉ α, β_B + 1 ∉ α and β_B + 1 ∉ β, due to the definition of β_B and the p-alternate property of the boundary conditions (α,β). In that case we can define the boundary conditions (α,β̌) by adding β_B + 1 to β and removing β_{n−k} from it, that is, β̌ = (β∖{β_{n−k}}) ∪ {β_B+1}. Then, for fixed t ∈ [a,b], the function g_4(x) = G(α,β̌,x,t) − G(α,β,x,t) is n times continuously differentiable on [a,b] and satisfies
L g_4 = 0,  x ∈ (a,b);
g_4^{(α_j)}(a) = 0,  α_j ∈ α;   g_4^{(β_j)}(b) = 0,  β_j ∈ β∖{β_{n−k}};
g_4^{(β_B+1)}(b) = −∂^{β_B+1} G(α,β,b,t)/∂x^{β_B+1}.
From (22) and (110) it follows that
(−1)^{n(β,β_B+1)} g_4^{(β_B+1)}(b) < 0.
Applying Property 2 of Lemma 1 to (111) (note that p ≥ β_B + 1) one has
(−1)^{n(β,β_B+1)} g_4^{(β_B+1)}(x) < 0,  x ∈ (a,b).
Likewise, applying (95) to G(α,β̌,x,t) (note that β̌_{n−k} < n−1) one has
(−1)^{n(β̌,β_B+1)} ∂^{β_B+1} G(α,β̌,x,t)/∂x^{β_B+1} < 0,  x ∈ (a,b),
which is also
(−1)^{n(β,β_B+1)} ∂^{β_B+1} G(α,β̌,x,t)/∂x^{β_B+1} > 0,  x ∈ (a,b).
Combining (112) and (114) one finally gets
(−1)^{n(β,β_B+1)} ∂^{β_B+1} G(α,β,x,t)/∂x^{β_B+1}
= (−1)^{n(β,β_B+1)} ∂^{β_B+1} G(α,β̌,x,t)/∂x^{β_B+1} − (−1)^{n(β,β_B+1)} g_4^{(β_B+1)}(x) > 0,  x ∈ (a,b),
which is (96).
The proof of (99)–(101) can be done using the same auxiliary Green function G(β,α,x,t) of (57), applying (63) to (94)–(96) and taking into account that n(α,j) + m(α,j) = n − j − 1 when j ∈ α. □

2.3. The Strongly Admissible Case

Last, but not least, we will prove a result on the strongly admissible case, extending up to the (n−1)-th order the partial derivatives of G(x,t) whose sign is constant on (a,b).
Theorem 7. 
Let us assume that (α,β) are strongly admissible boundary conditions and that
(−1)^{m(α,j)} a_j(x) ≤ 0,  x ∈ [a,b],  0 ≤ j ≤ n−1.
If n−1 ∈ α, let us assume that there exists at least one l_α ∉ α such that
(−1)^{m(α,l_α)} a_{l_α}(a) < 0.
If n−1 ∈ β, let us assume that there exists at least one l_β ∉ β such that
(−1)^{m(α,l_β)} a_{l_β}(b) < 0.
If either of the following two conditions holds:
1. 
α_1 = 0 and either {β_B+1, …, n−1} ⊂ α or {β_B+2, …, n−1} ⊂ β,
2. 
α_1 > 0 and either {α_A+2, …, n−1} ⊂ α or {α_A+1, …, n−1} ⊂ β,
then
(−1)^{m(α,j)} ∂^j G(x,t)/∂x^j > 0,  (x,t) ∈ (a,b)×(a,b),  0 ≤ j ≤ n−1.
Proof. 
The key to this theorem is to prove that, for fixed t ∈ [a,b], ∂^n G(x,t)/∂x^n ≥ 0 for x ∈ (a,b). This, added to the property of the Green functions (see [8], Chapter 3, page 105) that states that
lim_{x→t+} ∂^{n−1} G(x,t)/∂x^{n−1} = 1 + lim_{x→t−} ∂^{n−1} G(x,t)/∂x^{n−1},
and the presence of one homogeneous boundary condition on ∂^{n−1} G(x,t)/∂x^{n−1} at either a or b, guarantees that ∂^{n−1} G(x,t)/∂x^{n−1} does not change sign on (a,b). The same absence of sign changes for the partial derivatives of G(x,t) of lower orders follows immediately from this fact and the strong admissibility of the homogeneous boundary conditions.
To prove the non-negativity of ∂^n G(x,t)/∂x^n on (a,b) for fixed t ∈ [a,b], let us focus first on its value at the extremes a and b. Thus, from the definition of L one has
∂^n G(a,t)/∂x^n = −Σ_{l=0}^{n−1} a_l(a) ∂^l G(a,t)/∂x^l = −Σ_{l=0, l∉α}^{n−1} a_l(a) ∂^l G(a,t)/∂x^l,
and
∂^n G(b,t)/∂x^n = −Σ_{l=0}^{n−1} a_l(b) ∂^l G(b,t)/∂x^l = −Σ_{l=0, l∉β}^{n−1} a_l(b) ∂^l G(b,t)/∂x^l.
From Theorem 3 and the hypotheses (116) and (117), it is straightforward to show that ∂^n G(a,t)/∂x^n > 0 if n−1 ∈ α and ∂^n G(a,t)/∂x^n ≥ 0 otherwise. As for ∂^n G(b,t)/∂x^n, if l ∉ β, then l ∈ α and the strong admissibility forces m(α,l) = n(β,l). From here, Theorem 3 and the hypotheses (116) and (118), again, one gets that ∂^n G(b,t)/∂x^n > 0 if n−1 ∈ β and ∂^n G(b,t)/∂x^n ≥ 0 otherwise.
Next, let us make a similar comparison for the partial derivatives of lower order. If n−1 ∈ α, from Taylor's theorem there must be a δ > 0 such that
∂^{n−1} G(x,t)/∂x^{n−1} > 0,  x ∈ (a, a+δ).
Applying Taylor's theorem recursively and taking into account (21), one proves that there exists a δ_1 > 0 such that
(−1)^{m(α,i)} ∂^i G(α,β,x,t)/∂x^i > 0,  x ∈ (a, a+δ_1),  0 ≤ i ≤ n−1.
As for b, (22) already gives
(−1)^{n(β,n−1)} ∂^{n−1} G(b,t)/∂x^{n−1} = ∂^{n−1} G(b,t)/∂x^{n−1} > 0.
Applying again Taylor's theorem recursively and taking into account (22), one has that there must be a δ_2 > 0 such that
(−1)^{n(β,i)} ∂^i G(α,β,x,t)/∂x^i > 0,  x ∈ (b−δ_2, b],  i ∉ β,  0 ≤ i ≤ n−1,
and
(−1)^{n(β,i)} ∂^i G(α,β,x,t)/∂x^i < 0,  x ∈ (b−δ_2, b),  i ∈ β,  0 ≤ i ≤ n−1.
From (123) and (125) it is clear that ∂^{n−1} G(x,t)/∂x^{n−1} has the same (positive, in this case) sign on x ∈ (a, a+δ_1) ∪ (b−δ_2, b]. We can prove by induction that this same-sign property is valid for all partial derivatives of lower order, namely, that the signs given by (124), (126) and (127) are the same for each partial derivative. Thus, let us suppose that the sign of the partial derivative of order l+1 is the same in the neighborhoods of a and b, and is given by (124). If l ∈ β, then by Taylor's theorem the sign of the derivative of order l must be the opposite of the sign of the derivative of order l+1 in the neighborhood of b. Likewise, m(α,l) = m(α,l+1) + 1, so from (124) the sign of the derivative of order l must also be the opposite of the sign of the derivative of order l+1 in the neighborhood of a. Therefore, the signs of the partial derivatives of order l must coincide in the proximity of a and b. Likewise, if l ∈ α then by Taylor's theorem the sign of the derivative of order l must be the same as the sign of the derivative of order l+1 in the neighborhood of a, whereas the sign of the derivative of order l at b is given by (−1)^{n(β,l)}. If l+1 ∉ β, then from (126) and since n(β,l) = n(β,l+1), the sign of the derivative of order l+1 at b must also coincide with that of the derivative of order l at b. If l+1 ∈ β, then n(β,l) = n(β,l+1) + 1, so from (127) the sign of the derivative of order l+1 at b must also coincide with that of the derivative of order l at b. That means, again, that the signs of the partial derivatives of G(x,t) of order l must also coincide in the neighborhoods of a and b.
A similar reasoning can be applied to the case n−1 ∈ β, leading to the same conclusions.
Once we know that the signs of the partial derivatives of G(x,t) in the vicinity of a and b are the same, regardless of the order, and knowing already from Theorem 6 (note that strongly admissible conditions are (n−1)-alternate) that the sign of ∂^i G(x,t)/∂x^i is constant on (a,b) for 0 ≤ i ≤ β_B (case α_1 = 0, β_{n−k} = β_B), 0 ≤ i ≤ β_B+1 (case α_1 = 0, β_{n−k} > β_B), 0 ≤ i ≤ α_A (case α_1 > 0, α_k = α_A) or 0 ≤ i ≤ α_A+1 (case α_1 > 0, α_k > α_A), and determined by (124) in all cases (this is straightforward to check), it remains to prove that the sign of ∂^i G(x,t)/∂x^i is constant on (a,b) for the rest of the values of i up to n−1. We will do it by contradiction. Thus, let us suppose that there is an order l for which ∂^l G(x,t)/∂x^l changes sign on (a,b). Since the sign in the vicinity of the extremes is the same, there must be at least an even number of sign changes on (a,b). Let us call x_{1,l} the minimum of these points and x_{2,l} the maximum of these points. Clearly the sign of ∂^l G(x,t)/∂x^l must be the same for x ∈ (a, x_{1,l}) and x ∈ (x_{2,l}, b), and be given by (124).
Let us assume that {l, …, n−1} ⊂ α. Then by Rolle's theorem we can obtain a sequence of zeroes x_{1,j}, j = l, …, n−2, such that x_{1,l} > x_{1,l+1} > ⋯ > x_{1,n−2} > a, for which the sign of ∂^j G(x,t)/∂x^j is constant on (a, x_{1,j}), and again given by (124). Since ∂^{n−1} G(x,t)/∂x^{n−1} has a discontinuity at x = t, there must be a smallest point x_{1,n−1} < x_{1,n−2} where there is a change of sign of ∂^{n−1} G(x,t)/∂x^{n−1} from positive (see (124)) to negative, but from (120) it is clear that such a point cannot be x_{1,n−1} = t, so it must be a zero of ∂^{n−1} G(x,t)/∂x^{n−1}. From the mean value theorem there must exist an x* ∈ (a, x_{1,n−1}) such that ∂^n G(x*,t)/∂x^n < 0. However, the above reasoning implies that the sign of all partial derivatives of orders from l to n−1 is given by (124) for x ∈ (a, x_{1,n−1}), and from (116) that also means that the sign of ∂^n G(x,t)/∂x^n must be non-negative for all x ∈ (a, x_{1,n−1}), which is a contradiction.
A similar argument can be used if {l, …, n−1} ⊂ β and if α_1 > 0, which completes the proof. □
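The jump property, a key ingredient of the argument above, can be verified directly on the toy conjugate problem (our illustration): for n = 2 the (n−1)-th derivative is the first derivative, and its one-sided slopes across x = t differ by exactly 1.

```python
# Sanity check (ours) of the jump property used in the proof:
#   lim_{x->t+} d^{n-1}G/dx^{n-1} = 1 + lim_{x->t-} d^{n-1}G/dx^{n-1}.
# For n = 2 with y(a) = y(b) = 0, G is piecewise linear in x and its
# slope jumps by exactly 1 across x = t.
def G(x, t, a=0.0, b=1.0):
    return (x - a) * (t - b) / (b - a) if x <= t else (t - a) * (x - b) / (b - a)

t, h = 0.4, 1e-7
left = (G(t, t) - G(t - h, t)) / h       # one-sided slope from the left
right = (G(t + h, t) - G(t, t)) / h      # one-sided slope from the right
assert abs(right - left - 1.0) < 1e-6
```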
Remark 6. 
If a_l(a) = 0 for all l ∉ α, then the hypothesis (117) of Theorem 7 can be replaced by any combination of the a_l(x) that guarantees ∂^n G(x,t)/∂x^n > 0 for x ∈ (a, a+δ). Likewise, if a_l(b) = 0 for all l ∉ β, then the hypothesis (118) of Theorem 7 can be replaced by any combination of the a_l(x) that guarantees ∂^n G(x,t)/∂x^n > 0 for x ∈ (b−δ, b).
Remark 7. 
One cannot help wondering whether, with the right combinations of signs of the a_l(x) on [a,b], it is possible to guarantee the conservation of the sign of each partial derivative of G with respect to x on [a,b], regardless of how the α_j and β_j alternate, in the case of strongly admissible conditions (that is, without imposing Conditions 1 and 2 of Theorem 7). Even though that assertion looks quite plausible, its proof has so far been elusive to the authors.
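A boundary example (our observation, not a claim from the paper) of why strict-sign hypotheses on the coefficients cannot simply be dropped: for L y = y″ with the right-focal conditions y(a) = 0, y′(b) = 0 (which fall under the strongly admissible case as we read the definition), all a_j vanish, so no coefficient can supply the required strict inequality, and indeed ∂G/∂x is identically zero for x > t.

```python
# Boundary example (ours): L y = y'' with right-focal conditions
# y(a) = 0, y'(b) = 0.  Direct computation gives the piecewise-linear
#   G(x,t) = -(x - a) for x <= t,   -(t - a) for x >= t.
# G < 0 on (a,b) x (a,b), but dG/dx vanishes identically for x > t:
# with all a_j = 0, strictness of the first-derivative sign fails.
def G(x, t, a=0.0):
    return -(x - a) if x <= t else -(t - a)

t = 0.5
assert G(0.3, t) < 0 and G(0.8, t) < 0      # G itself keeps strict sign
slope_right = (G(0.9, t) - G(0.7, t)) / 0.2  # slope on the region x > t
assert slope_right == 0.0                    # no strict sign here
```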

3. Discussion

The results presented in this paper provide information about the sign, and the dependence on the extremes a and b, of the Green function of the problem (4) and its derivatives when the two-point boundary conditions are admissible, a property which encompasses many types of boundary conditions usually covered in the literature (for instance, conjugate or focal boundary conditions). By doing so, this paper extends (and, to a small degree, corrects, as discussed in Remark 5) the results of Eloe and Ridenhour in [1], a fine piece of Green function theory that is considered a reference in the subject. The paper goes beyond that to address the p-alternate and strongly admissible cases, for which results on the signs of higher derivatives on the interval are provided. Thus, whilst both [1] and Section 2.1 yield sign results only for derivatives up to the max(α_1, β_1)-th order, in the p-alternate case they are supplied for derivatives up to the (α_A+1)-th (if α_1 > 0) and (β_B+1)-th (if α_1 = 0) orders, and in the case of strongly admissible conditions, for derivatives up to the (n−1)-th order. As stated in the Introduction, this is relevant since the maximum value of the integer μ of the problem (6) which allows a cone-based approach is limited by the order of the highest derivative of G(x,t) with constant sign, so that finding results for higher derivatives of G(x,t) increases the applicability of cone theory to such problems.
One question that is left open is whether it is possible to find conditions on the signs of the coefficients of L which guarantee a constant sign of every derivative of G(x,t) on (a,b) up to the (n−1)-th order, for any strongly admissible boundary conditions. We conjecture an affirmative answer, but a proper proof is still pending.
To conclude, other areas that could benefit from an extension of these sign results are boundary conditions mixing different derivatives, and conditions of integral type. The determination of the sign of the Green function of fractional boundary value problems is also a topic that has raised interest recently, as part of more sophisticated mechanisms to find solutions of related non-linear fractional boundary value problems (see, for instance, [23,24,25,26]). However, there is still much to do in this area, since most of these cases require the explicit calculation of the associated Green function, which is only possible in the simplest ones. A more generic approach that provided signs without having to solve the fractional differential equations, similar to the one presented here, would therefore be very welcome.

Author Contributions

Conceptualization, P.A.B.; methodology, P.A.B. and L.J.; investigation, P.A.B.; validation, P.A.B. and L.J.; writing—original draft preparation, P.A.B.; writing—review and editing, L.J.; visualization, P.A.B. and L.J.; supervision, P.A.B.; project administration, L.J.; funding acquisition, L.J. All authors have read and agreed to the published version of the manuscript.

Funding

This work has been supported by the Spanish Ministerio de Economía, Industria y Competitividad (MINECO), the Agencia Estatal de Investigación (AEI) and Fondo Europeo de Desarrollo Regional (FEDER UE) grant MTM2017-89664-P.

Conflicts of Interest

The authors declare no conflict of interest.

References

1. Eloe, P.W.; Ridenhour, J. Sign properties of Green's functions for a family of two-point boundary value problems. Proc. Am. Math. Soc. 1994, 120, 443–452.
2. Butler, G.; Erbe, L. Integral comparison theorems and extremal points for linear differential equations. J. Diff. Equ. 1983, 47, 214–226.
3. Peterson, A. Green's functions for focal type boundary value problems. Rocky Mountain J. Math. 1979, 9, 721–732.
4. Peterson, A. Focal Green's functions for fourth-order differential equations. J. Math. Anal. Appl. 1980, 75, 602–610.
5. Elias, U. Green's functions for a nondisconjugate differential operator. J. Diff. Equ. 1980, 37, 319–350.
6. Peterson, A.; Ridenhour, J. Comparison theorems for Green's functions for focal boundary value problems. In World Scientific Series in Applicable Analysis; Recent Trends in Differential Equations; Agarwal, R.P., Ed.; World Scientific Publishing Co. Pte. Ltd.: Singapore, 1992; Volume 1, pp. 493–506.
7. Nehari, Z. Disconjugate linear differential operators. Trans. Am. Math. Soc. 1967, 129, 500–516.
8. Coppel, W. Disconjugacy; Springer: Berlin, Germany, 1971.
9. Krein, M.G.; Rutman, M.A. Linear Operators Leaving a Cone Invariant in a Banach Space; American Mathematical Society Translation Series 1; Cañada, A., Drábek, P., Fonda, A., Eds.; American Mathematical Society: Providence, RI, USA, 1962; Volume 10, pp. 199–325.
10. Krasnosel'skii, M.A. Positive Solutions of Operator Equations; P. Noordhoff Ltd.: Groningen, The Netherlands, 1964.
11. Keener, M.S.; Travis, C.C. Positive cones and focal points for a class of nth order differential equations. Trans. Am. Math. Soc. 1978, 237, 331–351.
12. Schmitt, K.; Smith, H.L. Positive solutions and conjugate points for systems of differential equations. Nonlinear Anal. Theory Methods Appl. 1978, 2, 93–105.
13. Eloe, P.W.; Hankerson, D.; Henderson, J. Positive solutions and conjugate points for multipoint boundary value problems. J. Diff. Equ. 1992, 95, 20–32.
14. Eloe, P.W.; Henderson, J. Focal point characterizations and comparisons for right focal differential operators. J. Math. Anal. Appl. 1994, 181, 22–34.
15. Almenar, P.; Jódar, L. Solvability of N-th order boundary value problems. Int. J. Diff. Equ. 2015, 2015, 1–19.
16. Almenar, P.; Jódar, L. Improving results on solvability of a class of n-th order linear boundary value problems. Int. J. Diff. Equ. 2016, 2016, 1–10.
17. Almenar, P.; Jódar, L. Solvability of a class of n-th order linear focal problems. Math. Modell. Anal. 2017, 22, 528–547.
18. Sun, Y.; Sun, Q.; Zhang, X. Existence and nonexistence of positive solutions for a higher-order three-point boundary value problem. Abstr. Appl. Anal. 2014, 2014, 1–7.
19. Hao, X.; Liu, L.; Wu, Y. Iterative solution to singular nth-order nonlocal boundary value problems. Boundary Val. Prob. 2015, 2015, 1–10.
20. Eloe, P.W.; Neugebauer, J.T. Avery Fixed Point Theorem applied to Hammerstein integral equations. Electr. J. Diff. Equ. 2019, 2019, 1–20.
21. Webb, J.R.L. New fixed point index results and nonlinear boundary value problems. Bull. Lond. Math. Soc. 2017, 49, 534–547.
22. Greguš, M. Third Order Linear Differential Equations; Mathematics and its Applications; Springer: Groningen, The Netherlands, 1987; Volume 22.
23. Jiang, D.; Yuan, C. The positive properties of the Green function for Dirichlet-type boundary value problems of nonlinear fractional differential equations and its application. Nonlinear Anal. Theory Methods Appl. 2010, 72, 710–719.
24. Wang, Y.; Liu, L. Positive properties of the Green function for two-term fractional differential equations and its application. J. Nonlinear Sci. Appl. 2017, 10, 2094–2102.
25. Zhang, L.L.; Tian, H. Existence and uniqueness of positive solutions for a class of nonlinear fractional differential equations. Adv. Diff. Equ. 2017, 2017, 1–19.
26. Wang, Y. The Green's function of a class of two-term fractional differential equation boundary value problem and its applications. Adv. Diff. Equ. 2020, 2020, 1–20.
