1. Introduction
It is well known that the classical orthogonal polynomials (i.e., Jacobi, Laguerre, and Hermite) satisfy a second-order differential equation with polynomial coefficients and that their zeros are simple. Based on these facts, Stieltjes gave a very interesting interpretation of the zeros of the classical orthogonal polynomials as the solution of an electrostatic equilibrium problem for n movable unit charges in the presence of a logarithmic potential (see [1] (Section 3)). An excellent introduction to Stieltjes’ results on this subject and their consequences can be found in [1] (Section 3) and [2] (Section 2). See also the survey [3] and the introductions of [4,5].
In order to make this paper self-contained, it is convenient to briefly recall the Jacobi, Laguerre, and Hermite cases. We begin with Jacobi. Let us consider n unit charges at the points  distributed in  and add two positive fixed charges of mass  and  at 1 and , respectively. If the charges repel each other according to the logarithmic potential law (i.e., the force is inversely proportional to the relative distance), then the total energy  of this system is obtained by adding the energy of the mutual interaction between the charges. This is
      
The minimum of (1) gives the electrostatic equilibrium. The points  where the minimum is obtained are the places where the charges will settle down. It is obvious that, for the minimum, all the  are distinct and different from .
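For the reader’s convenience, here is the standard form this energy takes in Szegő’s setting ([6] (Section 6.7)), with fixed positive charges p and q placed at 1 and -1; the notation of (1) may differ slightly:

E(x_1, \dots, x_n) = \sum_{1 \le i < j \le n} \log \frac{1}{|x_i - x_j|} + \sum_{k=1}^{n} \left( p \log \frac{1}{|1 - x_k|} + q \log \frac{1}{|1 + x_k|} \right), \qquad p, q > 0.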
For a minimum, it is necessary that  (), from which it follows that the polynomial  satisfies the differential equation
      
which is the differential equation for the monic Jacobi polynomial  (see [6] (Theorems 4.2.2 and 4.21.6)). The proof of the uniqueness of the minimum, based on the inequality between the arithmetic and geometric means, can be found in [6] (Section 6.7). In conclusion, the global minimum of (1) is reached when each of the n charges is located at a zero of the nth Jacobi polynomial .
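For reference, writing y(x) = \prod_{k=1}^{n} (x - x_k), the equation in question is the classical Jacobi differential equation (cf. [6] (Section 4.2)),

(1 - x^2)\, y''(x) + \bigl[ \beta - \alpha - (\alpha + \beta + 2) x \bigr] y'(x) + n (n + \alpha + \beta + 1)\, y(x) = 0,

where, in Szegő’s normalization, the fixed charges correspond to p = (\alpha + 1)/2 at 1 and q = (\beta + 1)/2 at -1.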
For the other two families of classical orthogonal polynomials on the real line (i.e., Laguerre and Hermite), Stieltjes also gave an electrostatic interpretation. Since, in this situation, the free charges move in an unbounded set, they can escape to infinity. Stieltjes avoided this situation by constraining the first (Laguerre) or second (Hermite) moment of his zero-counting measures (see [6] (Theorems 6.7.2 and 6.7.3) and [1] (Section 3.2)).
Besides Stieltjes, the electrostatic interpretation of the zeros of the classical orthogonal polynomials was also studied by Bôcher, Heine, and Van Vleck, among others. These works were developed between the end of the 19th century and the beginning of the 20th century. After that, the subject remained dormant for almost a century, until it received new impetus from advances in logarithmic potential theory, extensions of the notion of orthogonality, and the study of new classes of special functions.
Let  be a finite positive Borel measure with finite moments whose support  contains an infinite set of points. Assume that  denotes the monic orthogonal polynomial sequence with respect to the inner product
      
In general, an inner product is referred to as “standard” when the multiplication operator is symmetric with respect to the inner product, i.e., . As (3) is a standard inner product, we have that  has exactly n simple zeros on , where Ch(A) denotes the convex hull of a real set A and  denotes the interior of A. Furthermore, the sequence  satisfies the three-term recurrence relation
      
      where  for , , and  denotes the norm induced by (3). See [6,7,8] for these and other properties of .
Let  be as above, , , for , , , where  if  and . We consider the following Sobolev-type inner product:
      where  denotes the kth derivative of the function f. We also assume, without loss of generality, that  and . Let us denote by  the lowest-degree monic polynomial that satisfies
      
Henceforth, we refer to the sequence  of monic polynomials as the system of monic Sobolev-type orthogonal polynomials. It is not difficult to see that, for all , there exists a unique polynomial  of degree n. Note that the coefficients of  are the solution of a homogeneous linear system (5) with  unknowns and n equations. The uniqueness is a consequence of the required minimality of the degree. For more details on this type of nonstandard orthogonality, we refer the reader to [9,10].
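To illustrate how the orthogonality conditions (5) determine these polynomials in practice, here is a minimal computational sketch. It assumes that (4) has the usual discrete Sobolev-type form ⟨f, g⟩_S = ∫ f g dμ + Σ_{j,k} λ_{j,k} f^{(k)}(c_j) g^{(k)}(c_j); the measure, mass point, derivative order, and mass used below are hypothetical choices made only for the example, not the data of the paper.

import sympy as sp

x = sp.symbols('x')

def sobolev_ip(f, g):
    # Continuous part: dmu = dx on [-1, 1] (an illustrative choice of measure).
    ip = sp.integrate(f * g, (x, -1, 1))
    # Discrete part: pairs (c_j, k) with masses lambda_{j,k}; here a single hypothetical term.
    for c, k, lam in [(2, 1, sp.Rational(1, 2))]:
        ip += lam * sp.diff(f, x, k).subs(x, c) * sp.diff(g, x, k).subs(x, c)
    return sp.simplify(ip)

def monic_sobolev_poly(n):
    # S_n is monic of degree n with <S_n, x^m>_S = 0 for m = 0, ..., n-1, cf. (5).
    a = sp.symbols(f'a0:{n}')
    Sn = x**n + sum(a[m] * x**m for m in range(n))
    sol = sp.solve([sobolev_ip(Sn, x**m) for m in range(n)], a, dict=True)[0]
    return sp.expand(Sn.subs(sol))

print(monic_sobolev_poly(3))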
It is not difficult to see that, in general, (4) is nonstandard, i.e., . The properties of orthogonal polynomials with respect to standard inner products are distinct from those of Sobolev-type polynomials. For instance, the zeros of Sobolev-type polynomials can be complex or, if real, might lie outside the convex hull of the support of the measure , as demonstrated in the following example:
Example 1.  Let ; then the corresponding third-degree monic Sobolev-type orthogonal polynomial is , whose zeros are 0 and . Note that .
We will denote by  the linear space of all polynomials and by  the degree of . Let
      
Note that  for all  and . Additionally, for , from (5), we have that  satisfies the following quasi-orthogonality relations:
      for , where  denotes the linear space of polynomials with real coefficients and degree less than or equal to . Thus,  is quasi-orthogonal of order d with respect to the modified measure . Therefore,  has at least  changes of sign in .
Taking into account the known results for measures with bounded support (see [11] (1.10)), the number of zeros located in the interior of the support of the measure is closely related to , where the symbol  denotes the cardinality of a given set A. Note that  is the number of terms in the discrete part of  (i.e., ).
From Section 3 onward, we will restrict our attention to the case when the measure  in (4) is the Jacobi measure  () on . Some of the results we obtain generalize previous works in which only derivatives up to order one were considered. For more details, we refer the reader to [12,13] and the references therein.
The aim of this paper is to give an electrostatic interpretation for the distribution of the zeros of a wide class of Jacobi-Sobolev polynomials, following an approach based on the works [4,14,15] and the original ideas of Stieltjes in [16,17].
In the next section, we obtain a formula that allows us to express the polynomial  as a linear combination of  and , whose coefficients are rational functions. We refer to this formula as the “connection formula”. Section 3 and Section 4 deal with the ladder (raising and lowering) equations and operators for . We combine the raising and lowering operators to prove that the sequence of monic polynomials  satisfies the second-order linear differential Equation (35) with polynomial coefficients.
In the last section, we give a sufficient condition for an electrostatic interpretation of the distribution of the zeros of  as the logarithmic potential interaction of unit positive charges in the presence of an external field. Several examples are given to illustrate whether or not this condition is satisfied.
  2. Connection Formula
Let  be a finite positive Borel measure with finite moments, whose support  contains an infinite set of points. Assume that  denotes the monic orthogonal polynomial sequence with respect to the inner product (3). We first recall the well-known Christoffel-Darboux formula for , the kernel polynomials associated with .
      
We denote by  the partial derivatives of the kernel (6). Then, from the Christoffel-Darboux Formula (6) and the Leibniz rule, it is not difficult to verify that
      
      where  is the Taylor polynomial of degree k of f centered at y. Observe that (7) becomes the usual Christoffel-Darboux formula (6) if .
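To make the kernel and its partial derivatives concrete, here is a small sketch with dμ = dx on [-1, 1] used as a stand-in for the measure (a hypothetical choice); kernel_partial(n, j, l) below computes the mixed partial derivative of the kernel in the sense of (6) and (7).

import sympy as sp

x, y = sp.symbols('x y')

def monic_legendre(k, var):
    # Monic orthogonal polynomial of degree k for dmu = dx on [-1, 1] (illustrative measure).
    Pk = sp.legendre(k, var)
    return sp.expand(Pk / sp.Poly(Pk, var).LC())

def kernel(n):
    # K_n(x, y) = sum_{k=0}^{n} P_k(x) P_k(y) / ||P_k||^2, cf. (6).
    K = sp.Integer(0)
    for k in range(n + 1):
        norm2 = sp.integrate(monic_legendre(k, x) ** 2, (x, -1, 1))
        K += monic_legendre(k, x) * monic_legendre(k, y) / norm2
    return sp.expand(K)

def kernel_partial(n, j, l):
    # j-th partial derivative in x and l-th partial derivative in y of K_n(x, y).
    return sp.diff(kernel(n), x, j, y, l)

print(kernel_partial(2, 1, 1))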
Therefore, from the Fourier expansion of  in terms of the basis  and using (8), we obtain
      
Now, replacing (7) in (9), we have the connection formula
      
Differentiating Equation (9) ℓ times and evaluating at  for each ordered pair , we obtain the following system of  linear equations in the  unknowns .
      
The remainder of this section is devoted to proving that system (11) has a unique solution. The following lemma is essential to achieve this goal.
Lemma 1.  Let  be a (finite) set of  pairs. Denote  where  is the projection function over the first coordinate, i.e., ,  and . Let  be an arbitrary polynomial of degree k for . Then, for all , the  matrix  has full rank .
Proof.  First, note that, using elementary column transformations, we can reduce the proof to the case when , for . On the other hand,  for , so , and it is sufficient to prove the case . Consider the  matrix
        
        where . Without loss of generality, we can rearrange the rows of  such that
        
Note that  is obtained by taking some rows from , the rows , such that . Consider the matrix
        
From [18] (Theorem 20), we compute  as
        
Then the n row vectors of  are linearly independent, and consequently, the  rows of  are also linearly independent.    □
Now we can rewrite (11) in the matrix form
      
- 
          is the identity matrix of order . 
- 
          is the -diagonal matrix with the diagonal entries , . 
- 
          is the column vector . 
- 
          and  are column vectors with the entries , and ,  respectively. 
- 
          is a  matrix whose entry in the th row and th column, , is   
Clearly, we can write  where  is a matrix of order  with full rank for all , according to Lemma 1.
Then the matrix  is a  positive definite matrix for all ; see [19] (Theorem 7.2.7(c)). Since  is a diagonal matrix with positive entries, it follows that  is also a positive definite matrix, and consequently,  is nonsingular. Then the linear system (12) has the unique solution
      
Using this notation, we can rewrite (9) in the compact form
      
      where  is a row vector with the entries , for . Now, replacing (13) into (14), we obtain the matrix version of the connection Formula (10)
      
  3. Ladder Equations for Jacobi-Sobolev Polynomials
Henceforth, we will restrict our attention to the Jacobi-Sobolev case. Therefore, we consider in the inner product (4) the measure , where  and whose support is . To simplify the notation, we will continue to write  instead of  to denote the corresponding nth Jacobi-Sobolev monic polynomial. In the following, we omit the parameters  and  when no confusion arises.
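Before stating the classical relations we need, note that the monic Jacobi polynomials themselves can be generated and sanity-checked directly in a computer algebra system. The following minimal sketch (with sample parameters α = 1 and β = 2, chosen only for illustration) builds them with sympy and verifies both orthogonality and the classical second-order differential equation (cf. (2) and Remark 3).

import sympy as sp

x = sp.symbols('x')
a, b = 1, 2                      # sample values of the parameters alpha and beta

def monic_jacobi(n):
    # Monic Jacobi polynomial of degree n: divide the classical one by its leading coefficient.
    P = sp.jacobi(n, a, b, x)
    return sp.expand(P / sp.Poly(P, x).LC())

w = (1 - x) ** a * (1 + x) ** b  # Jacobi weight on [-1, 1]

# Orthogonality of the degree-3 polynomial against lower powers: expected [0, 0, 0].
print([sp.integrate(monic_jacobi(3) * x ** m * w, (x, -1, 1)) for m in range(3)])

# Classical Jacobi differential equation:
# (1 - x^2) y'' + (b - a - (a + b + 2) x) y' + n (n + a + b + 1) y = 0.
n, y = 3, monic_jacobi(3)
lhs = (1 - x ** 2) * sp.diff(y, x, 2) + (b - a - (a + b + 2) * x) * sp.diff(y, x) + n * (n + a + b + 1) * y
print(sp.simplify(lhs))          # expected: 0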
From [6] ((4.1.1), (4.3.2), (4.3.3), (4.5.1), and (4.21.6)), for the monic Jacobi polynomials, we have
      
      where
      
Let  be the identity operator. We define the two ladder Jacobi differential operators on  as
      
      where
      
From [6] ((4.5.7) and (4.21.6)), if , the sequence  satisfies the relations
      
In this case, the connection Formula (10) becomes
      
Let  and define the th degree polynomial
      
      for every . The following four lemmas are essential for defining the ladder (lowering and raising) operators.
Lemma 2.  For the sequences of polynomials  and , we obtain  where  where , , , and  are polynomials of degree at most d, ,  and d, respectively, and the coefficients , , , and  are given by (17).
Proof.  From (19) and (20), Equation (21) is immediate. To prove (22), we take derivatives with respect to x on both sides of (21) and then multiply by . Using (18) in the above expression, we obtain
        
        which is (22).    □
Lemma 3.  The sequences of monic polynomials  and  are also related by the equations  where  where , , , and  are polynomials of degree at most , d, d, and , respectively.
Proof.  The proof of (23) and (24) is a straightforward consequence of Lemma 2 and the three-term recurrence relation (15), whose coefficients are given in (16).    □
Lemma 4.  The monic orthogonal Jacobi polynomials can be expressed in terms of the monic Sobolev-type polynomials  in the following way:  where  is a polynomial of degree .
Proof.  Note that (21) and (23) form a system of two linear equations in the two unknowns  and . Therefore, from Cramer’s rule, we obtain (25) and (26).
As  and , we obtain
        
        where . From (12),
        
Since the matrix  is positive definite, we conclude that
        
        i.e.,  is a polynomial of degree .    □
Remark 1.  Obviously, from (25) (or (26)), we have that , where  is a polynomial of degree d. Hence, from (27), 
Theorem 1.  Under the above assumptions, we have the following ladder equations:  where
      
        
      
      
      
      
Proof.  Replacing (25) and (26) in (22) and (24), the two ladder Equations (30) and (31) follow.
        
- (1)  where, according to (29), , i.e., 
- (2) From (28), .  where, according to (29), , i.e., .
- (3)  Then, according to (29), .
- (4)  where, according to (29), , i.e., .
        □
In the previous theorem, the polynomials  were defined. Note that these polynomials are closely related to certain determinants. The following result summarizes some of their properties that will be of interest later. For brevity, we introduce the following notations:
Lemma 5.  Let  and . Then, the above polynomial determinants admit the following decompositions:
Proof.  Multiplying (21) by  and (22) by  and taking their difference, we have
        
Since  for  and  (see the proof of Theorem 1), there exists a polynomial  of degree  such that .
For the decomposition of  (), the procedure of the proof is analogous, using the linear system of (22) and (23) ((21)–(24)).    □
   4. Ladder Jacobi-Sobolev Differential Operators and Consequences
Definition 1  (Ladder Jacobi-Sobolev differential operators). Let  be the identity operator. We define the two ladder differential operators on  as 
Remark 2.  Assume in (4) that , whose support is  and  for all pairs . Under these conditions, it is not difficult to verify that  and .
Now, we can rewrite the ladder Equations (30) and (31) as
      
In this section, we state several consequences of Equations (33) and (34), which generalize known results for classical Jacobi polynomials to the Jacobi-Sobolev case.
First, we are going to obtain a second-order differential equation with polynomial coefficients for . The procedure is well known and consists in applying the raising operator  to both sides of the formula . Thus, we have
      
      from which we conclude the following result.
Theorem 2.  The nth monic orthogonal polynomial with respect to the inner product (4) is a polynomial solution of the second-order linear differential equation with polynomial coefficients  where 
Remark 3  (The classical Jacobi differential equation). Under the conditions stated in Remark 2, (4) reduces to the classical Jacobi inner product and (x). Note that, here, ,  and . For the rest of the expressions involved in the coefficients of the differential Equation (35), we have  Thus,  Substituting (37) in (36), the reader can verify that the differential Equation (35) becomes (2), i.e., 
Second, we can obtain the nth-degree polynomial of the sequence  as the repeated action (n times) of the raising differential operator on the first Sobolev-type polynomial of the sequence (i.e., the polynomial of degree zero).
Theorem 3.  The nth Jacobi-Sobolev polynomial  can be given by  where .
Proof.  Using (34), the theorem follows for . Next, the expression for  is a straightforward consequence of the definition of the raising operator.    □
To conclude this section, we prove an interesting three-term recurrence relation with rational coefficients, which is satisfied by the Jacobi-Sobolev monic polynomials. From the explicit expression of the ladder operators, shifting n to  in (34), we obtain
      
Next, we multiply the first equation by  and the second equation by , and adding the two resulting equations, we obtain the following three-term recurrence relation with rational coefficients for the Jacobi-Sobolev monic orthogonal polynomials.
Theorem 4.  Under the assumptions of Theorem 2, we have the recurrence relation  where the explicit formula of the coefficient is given in Theorem 1.
Proof.  From (30) and (31) for , we have
        
Multiplying by  and , respectively, we subtract both equations to eliminate the derivative term, obtaining
        
        which is the required formula.    □
Remark 4  (The classical Jacobi three-term recurrence relation). Under the assumptions of Remark 2, substituting (37) in (38), the reader can verify that the three-term recurrence relation (38) becomes (35), i.e., 
  5. Electrostatic Interpretation
Let us begin by recalling the definition of a sequentially ordered Sobolev inner product, which was stated in [20] (Definition 1) or [21] (Definition 1).
Definition 2.  Let  be a finite sequence of M ordered pairs and . We say that  is sequentially ordered with respect to A, if
- 1. 
- . 
- 2. 
-  for , where  denotes the interior of the convex hull of an arbitrary set . 
If , we say that  is sequentially ordered for brevity.
We say that the discrete Sobolev inner product (4) is sequentially ordered if the set of ordered pairs  may be arranged to form a finite sequence of ordered pairs which is sequentially ordered with respect to .
From the second condition of Definition 2, the coefficient  is the only coefficient  () different from zero for each . Hence, (4) takes the form
      
      where , with .
Hereinafter, we will restrict our attention to sequentially ordered discrete Sobolev inner products. The following two lemmas explain the reason for this restriction.
Lemma 6  ([20] (Theorem 1) and [21] (Proposition 4)). If (39) is a sequentially ordered discrete Sobolev inner product, then  has at least  changes of sign on .
Lemma 7  ([20] (Lemma 3.4) and [21] (Theorem 7)). Let (39) be a sequentially ordered Sobolev inner product. Then, for all n sufficiently large, each sufficiently small neighborhood of , , contains exactly one zero of , and the remaining  zeros lie on .
As the coefficient of  is real, under the same hypotheses of Lemma 7, for all n sufficiently large, the zeros of  are real and simple.
In the rest of this section, we will assume that the zeros of  are simple. Note that sequentially ordered Sobolev inner products provide us with a wide class of Sobolev inner products such that the zeros of the corresponding orthogonal polynomials are simple. Therefore, for all n sufficiently large, we have
      
Now we evaluate the polynomials , and  in (35) at , where  are the zeros of  arranged in increasing order. (Recall that, if y is a polynomial with simple zeros x_1, ..., x_n, then y''(x_k)/y'(x_k) = Σ_{j≠k} 2/(x_k - x_j).) Then, for , we obtain
      
Let us recall that, from (32),
      
Hence, from Theorems 1 and 2 and Lemma 5,
      
Let us write 
As  and  are polynomials of degree  and , respectively, we have that  is a proper rational fraction. Therefore,
      
Based on the results of our numerical experiments, in the remainder of this section, we will assume certain restrictions on some of the functions and parameters involved in (41). In that sense, we suppose the following:
- 1.
- The zeros of  are real, simple, and different from  for all . Therefore, , where  if , and  
- 2.
- Let , where  for all , and . Therefore,  
- 3.
- Substituting the previous decompositions into (41), we have
          where , , , and . We will assume that .
From (40), for ,
      
Let ,  and denote
      
Let us introduce the following electrostatic interpretation:
Consider the system of n movable positive unit charges at n distinct points of the real line, , where their interaction obeys the logarithmic potential law (that is, the force is inversely proportional to the relative distance) in the presence of the total external potential . Then,  is the total energy of this system.
Following the notation introduced in [14] (Section 2), the Jacobi-Sobolev inner product creates two external fields. One is a long-range field whose potential is , and the other is a short-range field whose potential is . Therefore, the total external potential  is the sum of the short- and long-range potentials, which depends on n (i.e., it is a varying external potential).
Therefore, for each , we have ; i.e., the zeros of  are the zeros of the gradient of the total potential energy  ().
Theorem 5.  The zeros of  are a local minimum of , if for all 
- 1. 
- . 
- 2. 
Proof.  The Hessian matrix of E at  is given by
        
Note that (44) is a symmetric real matrix with negative values in the nondiagonal entries. Additionally, note that
        
Since this is positive, we conclude, according to Gershgorin’s theorem [19] (Theorem 6.1.1), that the eigenvalues of the Hessian are positive, and therefore, (44) is positive definite. Combining this with the fact that , we conclude that  is a local minimum of (43).    □
The computations in the following examples have been performed using the symbolic computer algebra system Maxima [22]. In all cases, we fixed  and considered sequentially ordered Sobolev inner products (see Definition 2 and Lemmas 6 and 7). From (42), it is obvious that , where  and  for . Under the above condition,  is a local minimum (maximum) of E if the corresponding Hessian matrix at  is positive (negative) definite; in any other case,  is said to be a saddle point. We recall that a square matrix is positive (negative) definite if all its eigenvalues are positive (negative).
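The same check can be reproduced with any computer algebra system. The following is a minimal sketch of the computation, assuming the energy has the generic form E(x) = Σ_{i<j} log 1/(x_j − x_i) + Σ_k V(x_k) for x_1 < … < x_n; the external potential V and the configuration used below are hypothetical (classical Jacobi data with α = β = 0 and the zeros of the degree-3 Legendre polynomial), not the Jacobi-Sobolev data of the examples.

import numpy as np
import sympy as sp

def hessian_eigenvalues(points, V):
    # E(x) = sum_{i<j} log 1/(x_j - x_i) + sum_k V(x_k), evaluated for x_1 < ... < x_n.
    n = len(points)
    xs = sp.symbols(f'x0:{n}', real=True)
    E = -sum(sp.log(xs[j] - xs[i]) for i in range(n) for j in range(i + 1, n))
    E += sum(V(xk) for xk in xs)
    H = sp.hessian(E, xs)
    Hnum = np.array(H.subs(dict(zip(xs, points))).evalf().tolist(), dtype=float)
    return np.linalg.eigvalsh(Hnum)

# Long-range Jacobi potential (alpha = beta = 0) and the zeros of the monic Legendre P_3.
alpha = beta = 0
V = lambda t: -sp.Rational(alpha + 1, 2) * sp.log(1 - t) - sp.Rational(beta + 1, 2) * sp.log(1 + t)
print(hessian_eigenvalues([-np.sqrt(0.6), 0.0, np.sqrt(0.6)], V))   # all positive => local minimum

In this classical configuration all eigenvalues come out positive, which is the situation described by Theorem 5.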
Example 2  (Case in which the conditions of Theorem 5 are satisfied).
        
- 1. 
- Jacobi-Sobolev inner product . 
- 2. 
- 3. 
- Total potential energy , where 
- 4. 
- From (42),  for . 
- 5. 
- Computing the corresponding Hessian matrix at , we have that the approximate values of its eigenvalues are 
Thus, Theorem 5 holds for this example, and we have the required local electrostatic equilibrium distribution.
 Example 3  (Case in which the conditions of Theorem 5 are satisfied).
        
- 1. 
- Jacobi-Sobolev inner product 
- 2. 
- 3. 
- Total potential energy , where 
- 4. 
- From (40),  for . 
- 5. 
- Computing the corresponding Hessian matrix at , we have that the approximate values of its eigenvalues are 
Thus, Theorem 5 holds for this example, and we have the required local electrostatic equilibrium distribution.
 Example 4  (Case in which the conditions of Theorem 5 are not satisfied).
        
- 1. 
- Jacobi-Sobolev inner product . 
- 2. 
- 3. 
- Total potential energy , where 
- 4. 
- From (42),  for . 
- 5. 
- Computing the corresponding Hessian matrix at , we have that the approximate values of its eigenvalues are 
Then,  is a saddle point of .
Remark 5.  As can be noticed, in some cases, the configuration given by the external field includes complex points; they correspond to . Specifically, in the examples, these points are given as the zeros of . Since  is a polynomial with real coefficients, the nonreal zeros arise in complex conjugate pairs. Note that  where  denotes the real part of z. The antiderivative of the previous expression is . This means that, in our case, the presence of complex roots does not change the formulation of the energy function.
What Happens If the Hessian Is Not Positive Definite? A Case Study
Theorem 5 gives us a general condition to determine whether the electrostatic interpretation is a mere extension of the classical cases. However, in Example 4, the Hessian has one negative eigenvalue, of about , corresponding to the last variable . Therefore, we do not have the nice interpretation given in Theorem 5. However, note that the rest of the eigenvalues are positive, which means that the number
        
        remains positive for . In this case, the potential function exhibits a saddle point. The presence of the saddle point can be attributed to the attractor point  having a zero () in its neighborhood. In this case, we are able to give an interpretation of the position of the zeros by considering a problem of conditional extrema.
Assume that, when checking the Hessian, we find that the eigenvalues , for , are negative or zero. Without loss of generality, assume that this happens for the last  variables. This is a saddle point. However, the rest of the eigenvalues are positive, which means that the truncated Hessian  formed by taking the first  rows and columns of  is a positive definite matrix, by the same arguments used in the proof of Theorem 5.
Let us define the following problem of conditional extremum on 
Note that this problem is equivalent to solving
        
Let us prove that  is a minimum of this problem. Note that the gradient of this function corresponds to the first  conditions of (42), and the second-order condition is given by the truncated Hessian , which is positive definite by hypothesis.
Therefore, the configuration  corresponds to the local equilibrium of the energy function (43) once  charges are fixed.
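A minimal numerical illustration of this conditional-extremum viewpoint (with the same hypothetical classical potential as in the earlier sketch, and the last charge artificially pinned) is the following; the pinned position, starting point, and potential are illustrative assumptions only.

import numpy as np
from scipy.optimize import minimize

def energy(free, fixed, V):
    # Total energy of the configuration formed by the free charges followed by the fixed ones.
    x = np.concatenate([free, fixed])
    pair = -sum(np.log(abs(x[j] - x[i])) for i in range(len(x)) for j in range(i + 1, len(x)))
    return pair + sum(V(t) for t in x)

# Hypothetical data: classical potential with alpha = beta = 0; pin the last charge at 0.7.
V = lambda t: -0.5 * np.log(abs(1 - t)) - 0.5 * np.log(abs(1 + t))
fixed = np.array([0.7])
res = minimize(lambda z: energy(z, fixed, V), x0=np.array([-0.5, 0.1]), method='Nelder-Mead')
print(res.x)    # equilibrium positions of the two free charges, given the pinned charge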