1. Introduction and the Problem Statement
Among the nonlocal differential equations, which are the subject of many works, a special place is occupied by equations with involutive deviations of the argument. An involution is a mapping $S$ such that $S(S(x)) = x$, that is, $S^2 = I$. Among the variety of research papers in this direction, we note the monographs [1,2,3]. Refs. [4,5,6,7,8,9,10,11,12,13,14,15] are devoted to questions of solvability of boundary and initial-boundary value problems for differential equations with involution. Spectral questions for differential equations with involution are studied in [16,17,18,19,20,21,22,23,24,25]. For example, in [22], a boundary value problem for a second-order differential equation with involution is studied. The eigenfunctions and eigenvalues of this problem are given explicitly, and the system of eigenfunctions is complete in $L_2(-1, 1)$.
In [10], the following nonlocal analog of the Laplace operator is introduced:
$$\Delta_S u(x) = \sum_{k=0}^{l-1} a_k\, \Delta u(S^k x),$$
where $\Delta$ is the Laplace operator, $a_k$ for $k = 0, \dots, l-1$ are real numbers, $S$ is an $n \times n$ orthogonal matrix for which there exists a number $l \in \mathbb{N}$ such that $S^l = I$, and $I$ is the identity matrix. In the paper [10] cited above, for the corresponding nonlocal Poisson equation $\Delta_S u = f$ in the unit ball $\Omega$, the solvability questions for some boundary value problems with different boundary conditions are studied. The corresponding spectral problem for the Dirichlet boundary value problem is studied in [14]. In that work, as in the case of the one-dimensional problem from [22], the eigenfunctions and eigenvalues of the considered problem are obtained explicitly, and a theorem on the completeness of the system of eigenfunctions in the space $L_2(\Omega)$ is proved.
Furthermore, in [24], a nonlocal Laplace operator with multiple involution of the following form is introduced:
$$\Delta_S u(x) = \sum_{i=0}^{2^m - 1} a_i\, \Delta u(S_1^{i_1} \cdots S_m^{i_m} x),$$
where $a_i$ are real numbers, $(i_m \dots i_1)_2$ is a representation of the index $i$ in the binary number system, and $S_1, \dots, S_m$ are orthogonal $n \times n$ matrices satisfying the condition $S_k^2 = I$, $k = 1, \dots, m$. In the paper [24], the explicit form of the eigenfunctions and eigenvalues of the corresponding Dirichlet problem is given, and the completeness of the system of eigenfunctions in the space $L_2(\Omega)$ is proved.
In [26], a boundary value problem for the biharmonic equation is studied; it contains modified Hadamard integro-differential operators in the boundary conditions.
In the present paper, continuing the above studies of the solvability of boundary value problems for harmonic and biharmonic equations with both ordinary and multiple involution, we investigate similar questions for the Laplace operator with double involution of arbitrary orders. The special-form matrices arising in the considered problem are investigated in Theorems 1–3 of Section 2. Then, in Section 3 (see Theorems 4 and 5), with the help of Lemma 1, the existence of eigenfunctions and eigenvalues of the problem under consideration is established. In Section 4 (see Theorems 6 and 7), with the help of Lemma 2, the eigenfunctions and eigenvalues of the considered nonlocal differential equation are constructed. These eigenfunctions are presented explicitly, and the completeness of the resulting system of eigenfunctions in $L_2(\Omega)$ is established. All new concepts and results obtained are illustrated by seven examples.
Let $\Omega = \{x \in \mathbb{R}^n : |x| < 1\}$ be the unit ball in $\mathbb{R}^n$, $n \geq 2$, and $\partial\Omega = \{x \in \mathbb{R}^n : |x| = 1\}$ be the unit sphere. Let also $S_1$ and $S_2$ be two real commutative orthogonal $n \times n$ matrices such that $S_1^{l_1} = I$, $S_2^{l_2} = I$, $S_1 S_2 = S_2 S_1$, where $l_1, l_2 \in \mathbb{N}$. Note that, since $S_k^{l_k} = I$, then $S_k^{-1} = S_k^{l_k - 1}$ and $S_k^T = S_k^{l_k - 1}$. For example, the matrix $S_1$ can be an orthogonal matrix of the following form:
$$S_1 = \begin{pmatrix} \cos\varphi & -\sin\varphi & 0 \\ \sin\varphi & \cos\varphi & 0 \\ 0 & 0 & I \end{pmatrix}, \quad \varphi = \frac{2\pi}{l_1},$$
where $I$ is the identity matrix and $0$ are zero matrices of appropriate size. It is clear that $S_1^{l_1} = I$.
Let $l_1, l_2 \in \mathbb{N}$ and $a_i$, $i = 0, 1, \dots, l_1 l_2 - 1$, be a sequence of real numbers which we denote by $a$. If we represent the index $i$ in the form $i = i_1 + l_1 i_2$, where $i_k \in \{0, \dots, l_k - 1\}$ for $k = 1, 2$, then the elements of $a$ can be represented as $a_{i_1 + l_1 i_2}$, $i_1 = 0, \dots, l_1 - 1$, $i_2 = 0, \dots, l_2 - 1$. It is clear that, if $i = i_1 + l_1 i_2$, then $i_2 = [i/l_1]$, $i_1 = l_1\{i/l_1\}$, where $[\cdot]$ and $\{\cdot\}$ are the integer and fractional parts of a number. Furthermore, we consider the sequence $a$ also as a vector.
We introduce a new nonlocal differential operator formed by the sequence $a$ and the Laplace operator,
$$\Delta_a u(x) = \sum_{i=0}^{l_1 l_2 - 1} a_i\, \Delta u(S_1^{i_1} S_2^{i_2} x),$$
and formulate a natural boundary value problem with $\Delta_a$.

Problem $S$. Find a non-zero function $u(x)$ such that $u \in C^2(\Omega) \cap C(\overline{\Omega})$ and which satisfies the equations
$$-\Delta_a u(x) = \lambda u(x), \quad x \in \Omega, \qquad (1)$$
$$u(x) = 0, \quad x \in \partial\Omega, \qquad (2)$$
where $\lambda \in \mathbb{C}$. In the special case $l_1 = l_2 = 2$, this problem coincides with the spectral boundary value problem studied in [24].
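To make the action of $\Delta_a$ concrete, here is a small symbolic sketch. It is only an illustration under assumptions of ours: the rotation matrices, the coefficients $a_i$, and the test function are example choices, not data from the paper.

```python
import sympy as sp

x1, x2 = sp.symbols('x1 x2', real=True)
L1, L2 = 2, 3
a = [1, 2, 3, 4, 5, 6]                     # hypothetical coefficients a_0..a_5

def rot(t):                                # planar rotation matrix
    return sp.Matrix([[sp.cos(t), -sp.sin(t)], [sp.sin(t), sp.cos(t)]])

S1, S2 = rot(sp.pi), rot(2 * sp.pi / 3)    # S1^2 = I, S2^3 = I; they commute

def laplace_a(u):
    """Nonlocal operator: sum_i a_i * Laplacian of u(S1^{i1} S2^{i2} x)."""
    total = sp.Integer(0)
    for i in range(L1 * L2):
        i1, i2 = i % L1, i // L1
        y = (S1 ** i1) * (S2 ** i2) * sp.Matrix([x1, x2])
        w = u.subs({x1: y[0], x2: y[1]}, simultaneous=True)
        total += a[i] * (sp.diff(w, x1, 2) + sp.diff(w, x2, 2))
    return sp.simplify(total)

print(laplace_a(x1 ** 2 * x2))             # a polynomial in x1, x2
# For a radial u the rotations drop out: the result is (a_0+...+a_5)*Δu.
print(sp.simplify(laplace_a(x1 ** 2 + x2 ** 2)))   # -> 4 * 21 = 84
```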
  2. Auxiliary Results
In order to start studying the above problem (1) and (2), we need some auxiliary assertions. We introduce the function
$$v(x) = \sum_{i=0}^{l_1 l_2 - 1} a_i\, u(S_1^{i_1} S_2^{i_2} x), \qquad (3)$$
where the summation is carried out over the index $i$ represented in the form $i = i_1 + l_1 i_2$. From equality (3), taking into account that $S_k^{l_k} = I$, it can be concluded that the functions $v(S_1^{j_1} S_2^{j_2} x)$, where $j = j_1 + l_1 j_2$, are expressed as linear combinations of the functions $u(S_1^{i_1} S_2^{i_2} x)$. Let us introduce the following vectors:
$$U(x) = \big(u(S_1^{i_1} S_2^{i_2} x) : i = 0, \dots, l_1 l_2 - 1\big)^T, \quad V(x) = \big(v(S_1^{j_1} S_2^{j_2} x) : j = 0, \dots, l_1 l_2 - 1\big)^T$$
of order $l_1 l_2$. Then, the dependence of $V(x)$ on $U(x)$ can be presented in the matrix form
$$V(x) = M_a U(x), \qquad (4)$$
where $M_a$ is some matrix of order $l_1 l_2$.
Let us investigate the structure of matrices of the form $M_a$. For this, we introduce a new operation on indices of matrix coefficients as follows:
$$i \oplus j = (i_1 + j_1) \bmod l_1 + l_1\big((i_2 + j_2) \bmod l_2\big),$$
where $i = i_1 + l_1 i_2$ is the representation of the index $i$ as mentioned above. It is clear that ⊕ is a commutative and associative operation on $\{0, 1, \dots, l_1 l_2 - 1\}$ and $i \oplus 0 = i$. For example, if $l_1 = 2$, $l_2 = 3$, then $2 \oplus 3 = 5$ or $3 \oplus 5 = 0$. If we assume that $i_1$ and $i_2$ are arbitrary integers, reduced by $\bmod\, l_1$ and $\bmod\, l_2$, respectively, then the operation ⊕ is formally applicable to all numbers of the form $i_1 + l_1 i_2$. We also need the inverse operation
$$i \ominus j = (i_1 - j_1) \bmod l_1 + l_1\big((i_2 - j_2) \bmod l_2\big),$$
so that $(i \ominus j) \oplus j = i$.
We extend the operations ⊕ and ⊖ to all numbers of the form $i_1 + l_1 i_2$ by reducing the first component modulo $l_1$ and the second one modulo $l_2$. For example, if $l_1 = 2$, $l_2 = 3$, then $2 \ominus 3 = 1$ and $3 \ominus 5 = 4$.
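For readers who prefer to experiment, here is a minimal Python sketch of this index arithmetic. The helper names `encode`, `decode`, `oplus`, `ominus` are ours, and the orders $l_1 = 2$, $l_2 = 3$ are example values.

```python
# An index i in {0, ..., l1*l2 - 1} is identified with the pair
# (i1, i2) = (i mod l1, i // l1), i.e., i = i1 + l1*i2.
L1, L2 = 2, 3  # orders of the two involutions (example values)

def decode(i):
    """Split i = i1 + l1*i2 into its components (i1, i2)."""
    return i % L1, i // L1

def encode(i1, i2):
    """Assemble the index from components, reducing them modulo l1, l2."""
    return (i1 % L1) + L1 * (i2 % L2)

def oplus(i, j):
    """Componentwise addition modulo (l1, l2)."""
    i1, i2 = decode(i); j1, j2 = decode(j)
    return encode(i1 + j1, i2 + j2)

def ominus(i, j):
    """Componentwise subtraction modulo (l1, l2), inverse of oplus."""
    i1, i2 = decode(i); j1, j2 = decode(j)
    return encode(i1 - j1, i2 - j2)

# Sanity checks for l1 = 2, l2 = 3 (cf. the examples above).
assert oplus(2, 3) == 5 and oplus(3, 5) == 0
assert ominus(2, 3) == 1 and ominus(3, 5) == 4
assert all(oplus(ominus(i, j), j) == i for i in range(6) for j in range(6))
```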
Theorem 1. The matrix $M_a$ defined by the equality (4) is represented as
$$M_a = \big(a_{j \ominus i}\big)_{i,j=0}^{l_1 l_2 - 1}. \qquad (5)$$
The sum of matrices of the form (5) is a matrix of the same form.

Proof. Consider the function $v(S_1^{i_1} S_2^{i_2} x)$ whose coefficients at $u(S_1^{j_1} S_2^{j_2} x)$ make up the $i$th row of the matrix $M_a$:
$$v(S_1^{i_1} S_2^{i_2} x) = \sum_{j=0}^{l_1 l_2 - 1} a_j\, u(S_1^{i_1 + j_1} S_2^{i_2 + j_2} x). \qquad (6)$$
Here, the following properties $S_1 S_2 = S_2 S_1$, $S_1^{l_1} = I$ and $S_2^{l_2} = I$ of the matrices $S_1$ and $S_2$ have been used. If we replace the index $j$ by the index $k$ using the equality $k = i \oplus j$, then $j = k \ominus i$, and we have $a_j = a_{k \ominus i}$. The substitution $j \to k$ changes the order of summation in (6). For instance, if $l_1 = 2$, $l_2 = 3$ and $i = 3$, then the order of indices $(0, 1, 2, 3, 4, 5)$ goes to $(3, 2, 5, 4, 1, 0)$. After changing the index, we have
$$v(S_1^{i_1} S_2^{i_2} x) = \sum_{k=0}^{l_1 l_2 - 1} a_{k \ominus i}\, u(S_1^{k_1} S_2^{k_2} x).$$
Comparing the resulting equality with (4), we make sure that (5) is true.

There is no doubt that, if $M_a$ and $M_b$ are matrices of the form (5), then
$$M_a + M_b = \big(a_{j \ominus i} + b_{j \ominus i}\big)_{i,j=0}^{l_1 l_2 - 1} = M_{a+b},$$
which completes the proof of the theorem. □
Example 1. For example, let us write the matrix $M_a$ for $l_1 = 2$ and $l_2 = 3$:
$$M_a = \begin{pmatrix}
a_0 & a_1 & a_2 & a_3 & a_4 & a_5 \\
a_1 & a_0 & a_3 & a_2 & a_5 & a_4 \\
a_4 & a_5 & a_0 & a_1 & a_2 & a_3 \\
a_5 & a_4 & a_1 & a_0 & a_3 & a_2 \\
a_2 & a_3 & a_4 & a_5 & a_0 & a_1 \\
a_3 & a_2 & a_5 & a_4 & a_1 & a_0
\end{pmatrix}.$$
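The matrix of Theorem 1 can also be generated directly from the vector $a$; the following sketch (helper names ours; $l_1 = 2$, $l_2 = 3$ as in Example 1) reproduces the index pattern above.

```python
import numpy as np

L1, L2 = 2, 3  # orders of the involutions
N = L1 * L2

def ominus(i, j):
    """Componentwise difference of indices modulo (l1, l2)."""
    return (i % L1 - j % L1) % L1 + L1 * ((i // L1 - j // L1) % L2)

def matrix_M(a):
    """Matrix (5) of Theorem 1: the (i, j) entry equals a[j ⊖ i]."""
    return np.array([[a[ominus(j, i)] for j in range(N)] for i in range(N)])

a = np.arange(N)          # stand-in for (a_0, ..., a_5)
print(matrix_M(a))        # reproduces the index pattern of Example 1
# Each row is a permutation of a; row i is obtained from row 0 by the
# shift j -> j ⊖ i (cf. Corollary 1 below).
```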
       Let us present some corollaries of Theorem 1.
Corollary 1. The matrix $M_a$ is uniquely determined by its first row $a = (a_0, a_1, \dots, a_{l_1 l_2 - 1})$.

It is not difficult to see that the $i$-th row of the matrix $M_a$ is represented via its 1st row as $(a_{j \ominus i})_{j=0}^{l_1 l_2 - 1}$. We indicate this property of the matrix $M_a$ by the equality $M_a = M[a]$. For example, the matrix $M_a$ from Example 1 can be written as $M_a = M[a]$, where $a = (a_0, a_1, a_2, a_3, a_4, a_5)$.
In Ref. [10], matrices of the following form are studied:
$$M_b = \big(b_{(j-i) \bmod l}\big)_{i,j=0}^{l-1}, \qquad (7)$$
which coincide with the matrix $M_a$ in the case $l_1 = l$, $l_2 = 1$ since, in this case, $i = i_1$ and $j \ominus i = (j - i) \bmod l$.
Corollary 2. The matrix $M_a$ has the structure of a matrix consisting of $l_2 \times l_2$ square blocks, each of which is an $l_1 \times l_1$ matrix of type (7). If we represent the sequence $a$ as $a = (a^{(0)}, a^{(1)}, \dots, a^{(l_2-1)})$, where $a^{(t)} = (a_{l_1 t}, \dots, a_{l_1 t + l_1 - 1})$, and denote $A_t = M[a^{(t)}]$, then the equality
$$M_a = \big(A_{(J - I) \bmod l_2}\big)_{I,J=0}^{l_2 - 1} \qquad (8)$$
is true.

Proof. Obviously, the block matrix on the right side of (8) has size $l_1 l_2 \times l_1 l_2$. Denote its arbitrary element as $m_{ij}$, where $i, j = 0, \dots, l_1 l_2 - 1$. If we write $i = i_1 + l_1 i_2$, $j = j_1 + l_1 j_2$, then we have $i_2 = [i/l_1]$ and $j_2 = [j/l_1]$. This means that the element $m_{ij}$ is in the $j_2$th block column and in the $i_2$th block row of this block matrix. Therefore, in accordance with the structure of the matrix on the right side of (8), we have that $m_{ij}$ is an element of the block $A_{(j_2 - i_2) \bmod l_2}$. If we now take into account the values of the indices $j_1$ and $i_1$, which mean that the element $m_{ij}$ is in the $j_1$-th column and in the $i_1$-th row of the matrix $A_{(j_2 - i_2) \bmod l_2}$, then $m_{ij} = a^{(t)}_{(j_1 - i_1) \bmod l_1}$ with $t = (j_2 - i_2) \bmod l_2$, where $a^{(t)}$ is the vector representing the 1st row of the matrix $A_t$. Since from the definition of $a^{(t)}$ it follows that $a^{(t)}_k = a_{l_1 t + k}$, $k = 0, \dots, l_1 - 1$, then we have
$$m_{ij} = a_{(j_1 - i_1) \bmod l_1 + l_1 ((j_2 - i_2) \bmod l_2)} = a_{j \ominus i}.$$
Taking into account equality (5) from Theorem 1, this implies the equality of matrices (8). This proves the corollary. □
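The block identity (8) is easy to confirm numerically; in the sketch below (our own helper names, with $l_1 = 2$, $l_2 = 3$) the matrix $M_a$ is reassembled from circulantly arranged $l_1 \times l_1$ blocks, anticipating Example 2 below.

```python
import numpy as np

L1, L2 = 2, 3
N = L1 * L2

def cyc(i, j, l):
    """Cyclic difference (i - j) mod l."""
    return (i - j) % l

def small_block(b):
    """l1 x l1 matrix of type (7) generated by the sub-vector b."""
    return np.array([[b[cyc(j, i, L1)] for j in range(L1)] for i in range(L1)])

def matrix_M(a):
    """Full matrix (5): entry (i, j) is a[j ⊖ i]."""
    om = lambda i, j: cyc(i % L1, j % L1, L1) + L1 * cyc(i // L1, j // L1, L2)
    return np.array([[a[om(j, i)] for j in range(N)] for i in range(N)])

a = np.arange(N)
A = [small_block(a[L1 * t : L1 * (t + 1)]) for t in range(L2)]   # A_0, A_1, A_2
M_blocks = np.block([[A[cyc(J, I, L2)] for J in range(L2)] for I in range(L2)])
assert np.array_equal(matrix_M(a), M_blocks)   # the block identity (8)
```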
Example 2. We can see the property (8) of matrices of the form $M_a$ by the matrix $M_a$ from Example 1, where $l_1 = 2$, $l_2 = 3$, $a = (a_0, \dots, a_5)$. If we denote $a^{(0)} = (a_0, a_1)$, $a^{(1)} = (a_2, a_3)$, $a^{(2)} = (a_4, a_5)$ and
$$A_t = M[a^{(t)}] = \begin{pmatrix} a_{2t} & a_{2t+1} \\ a_{2t+1} & a_{2t} \end{pmatrix}, \quad t = 0, 1, 2,$$
then the matrix $M_a$ is written as
$$M_a = \begin{pmatrix} A_0 & A_1 & A_2 \\ A_2 & A_0 & A_1 \\ A_1 & A_2 & A_0 \end{pmatrix}.$$

Corollary 3. The transposed matrix $M_a^T$ has the structure of a matrix of the form (5), and besides $M_a^T = M[a']$, where $a'_i = a_{\ominus i} = a_{(-i_1) \bmod l_1 + l_1((-i_2) \bmod l_2)}$, and both index components $i_1$ and $i_2$ are taken by $\bmod\, l_1$ and by $\bmod\, l_2$, respectively.
Proof. This is because $(M_a^T)_{ij} = (M_a)_{ji} = a_{i \ominus j} = a_{\ominus(j \ominus i)} = a'_{j \ominus i}$. The corollary is proved. □
Example 3. For the matrix $M_a$ from Example 2, we have $M_a^T = M[a']$, where $a' = (a_0, a_1, a_4, a_5, a_2, a_3)$.

Is the multiplication of matrices of the form (5) again such a matrix?
Theorem 2. The product of matrices of the form (5) is again a matrix of the same form, and the multiplication is commutative.

Proof. Let the matrices $M_a$ and $M_b$ be two matrices of the form (5). Then,
$$(M_a M_b)_{ik} = \sum_{j=0}^{l_1 l_2 - 1} a_{j \ominus i}\, b_{k \ominus j}.$$
As in the proof of Theorem 1, let us change the index $j \to t$ in the sum above, in accordance with the equation $t = j \ominus i$. Then, $j = t \oplus i$, and therefore the correspondence $j \leftrightarrow t$ is one-to-one. Thus, the index replacement $j \to t$ changes only the summation order. Due to the commutativity and associativity of ⊕, we obtain
$$(M_a M_b)_{ik} = \sum_{t=0}^{l_1 l_2 - 1} a_t\, b_{k \ominus (t \oplus i)} = \sum_{t=0}^{l_1 l_2 - 1} a_t\, b_{(k \ominus i) \ominus t}. \qquad (9)$$
The elements of the first row of the resulting matrix have the form $c_k = \sum_{t} a_t b_{k \ominus t}$, and hence $(M_a M_b)_{ik} = c_{k \ominus i}$. Therefore, the matrix $M_a M_b$ has the form (5).

The commutativity of the product $M_a M_b$ can be easily obtained from the equality (9). Replacing $t \to \tau = (k \ominus i) \ominus t$ in the last sum from (9), and hence $t = (k \ominus i) \ominus \tau$, we obtain
$$(M_a M_b)_{ik} = \sum_{\tau=0}^{l_1 l_2 - 1} b_\tau\, a_{(k \ominus i) \ominus \tau}.$$
The last sum in this equality is a common element of the matrix $M_b M_a$. Hence, (9) means that $M_a M_b = M_b M_a$. The theorem is proved. □
Let $\epsilon_l = e^{2\pi\mathrm{i}/l}$, $\mathrm{i}^2 = -1$, be the $l$th root of unity. In [10], it is shown that, for the matrix of the form (7),
$$M_b = \big(b_{(j-i) \bmod l}\big)_{i,j=0}^{l-1},$$
where $b = (b_0, \dots, b_{l-1})$, the eigenvectors and eigenvalues have the form
$$V_s = \big(1, \epsilon_l^{s}, \epsilon_l^{2s}, \dots, \epsilon_l^{(l-1)s}\big)^T, \qquad \mu_s = \sum_{k=0}^{l-1} b_k\, \epsilon_l^{sk}, \quad s = 0, \dots, l-1. \qquad (10)$$

Note that the eigenvectors of the matrices $M_b$ do not depend on the vector $b$. Is this true for matrices of type (5)? We will see below.
We present a theorem that clarifies questions about eigenvectors and eigenvalues of the matrices $M_a$ from (5).
Theorem 3. The eigenvectors of the matrix $M_a$ are written as
$$V = \big(I_{l_1},\ \epsilon_{l_2}^{s_2} I_{l_1},\ \dots,\ \epsilon_{l_2}^{(l_2-1)s_2} I_{l_1}\big)^T V^{(1)}_{s_1}, \qquad (11)$$
where $I_{l_1}$ is the identity $l_1 \times l_1$ matrix, the vector $V^{(1)}_{s_1}$ is taken from (10) at $l = l_1$, and $\epsilon_{l_2}$ is the $l_2$th root of unity, $s_1 = 0, \dots, l_1 - 1$, $s_2 = 0, \dots, l_2 - 1$. The eigenvectors of the matrix $M_a$ do not depend on the vector $a$.

Proof. In accordance with Theorem 1 and Corollary 2, represent the matrix $M_a$ as
$$M_a = \big(A_{(J-I) \bmod l_2}\big)_{I,J=0}^{l_2-1},$$
where $A_t = M[a^{(t)}]$, $t = 0, \dots, l_2 - 1$. Consider the block multiplication of an $l_1 l_2 \times l_1 l_2$ matrix by an $l_1 l_2 \times l_1$ matrix of the form
$$W_{s_2} = \big(I_{l_1},\ \epsilon_{l_2}^{s_2} I_{l_1},\ \dots,\ \epsilon_{l_2}^{(l_2-1)s_2} I_{l_1}\big)^T,$$
where the square blocks $\epsilon_{l_2}^{m s_2} I_{l_1}$ have size $l_1 \times l_1$. Let us extend the values of the upper indices of the matrices $A_t$ to all integers $t$ and calculate them by $\bmod\, l_2$. Then, similarly to (7), the $m$th block row of the matrix $M_a W_{s_2}$ we represent as
$$\sum_{J=0}^{l_2-1} A_{(J-m) \bmod l_2}\, \epsilon_{l_2}^{J s_2}.$$
Since the exponent of $\epsilon_{l_2}$ can also be calculated by $\bmod\, l_2$, then we write
$$\sum_{J=0}^{l_2-1} A_{(J-m) \bmod l_2}\, \epsilon_{l_2}^{J s_2} = \epsilon_{l_2}^{m s_2} \sum_{t=0}^{l_2-1} A_t\, \epsilon_{l_2}^{t s_2}.$$
Here, the substitution $t = (J - m) \bmod l_2$ of the index has been made. Thus, we have
$$M_a W_{s_2} = W_{s_2} B_{s_2}, \qquad B_{s_2} = \sum_{t=0}^{l_2-1} \epsilon_{l_2}^{t s_2} A_t. \qquad (12)$$
It is easy to see that, in the obtained equality, the matrix $B_{s_2}$ has the type of (7) with $l = l_1$, and hence the vectors $V^{(1)}_{s_1}$, for $s_1 = 0, \dots, l_1 - 1$, are eigenvectors of $B_{s_2}$. We multiply the matrix equality (12) on the right by the vector $V^{(1)}_{s_1}$. Then, we obtain
$$M_a W_{s_2} V^{(1)}_{s_1} = W_{s_2} B_{s_2} V^{(1)}_{s_1} = \mu\, W_{s_2} V^{(1)}_{s_1},$$
where $\mu$ is the eigenvalue of the matrix $B_{s_2}$ corresponding to the eigenvector $V^{(1)}_{s_1}$. If we now recall the notation (11), then we obtain
$$M_a V = \mu V,$$
i.e., $V = W_{s_2} V^{(1)}_{s_1}$ is an eigenvector of $M_a$. This completes the proof. □
Now, we present some corollaries from Theorem 3 that make it possible to construct the eigenvectors and eigenvalues of the matrix $M_a$.
Corollary 4. $1^\circ$. The eigenvector of $M_a$ numbered by $s = s_1 + l_1 s_2$ one can write as
$$V_s = \big(\epsilon_{l_1}^{s_1 i_1}\, \epsilon_{l_2}^{s_2 i_2} : i = i_1 + l_1 i_2 = 0, \dots, l_1 l_2 - 1\big)^T, \qquad (13)$$
where $\epsilon_{l_1}$ is the $l_1$th root of unity and $\epsilon_{l_2}$ is the $l_2$th root of unity. The eigenvalue corresponding to this eigenvector can be written in a similar form:
$$\mu_s = \sum_{i=0}^{l_1 l_2 - 1} a_i\, \epsilon_{l_1}^{s_1 i_1}\, \epsilon_{l_2}^{s_2 i_2}. \qquad (14)$$
$2^\circ$. The eigenvectors of the matrix $M_a^T$ coincide with the eigenvectors $V_s$, and the corresponding eigenvalues have the form $\overline{\mu_s}$.

Proof. It is easy to see that the equality (11) can be written as
$$V_s = \big(V^{(1)}_{s_1},\ \epsilon_{l_2}^{s_2} V^{(1)}_{s_1},\ \dots,\ \epsilon_{l_2}^{(l_2-1)s_2} V^{(1)}_{s_1}\big)^T,$$
where the order of the vector elements corresponds to the order established for the numbers $i = i_1 + l_1 i_2$. This proves equality (13).

Furthermore, from Theorem 3, it follows that the eigenvalue $\mu_s$ of the matrix $M_a$ corresponding to the eigenvector $V_s$ is the same as the eigenvalue of the matrix
$$B_{s_2} = \sum_{t=0}^{l_2-1} \epsilon_{l_2}^{t s_2} A_t$$
corresponding to the eigenvector $V^{(1)}_{s_1}$. Here, $A_t = M[a^{(t)}]$. Since the matrix $B_{s_2}$ is of type (7), then, in accordance with (10), we find the vector representing the first row of the matrix $B_{s_2}$. Denote this vector as $b = \sum_{t=0}^{l_2-1} \epsilon_{l_2}^{t s_2} a^{(t)}$. Using the formula (10), we find
$$\mu_s = \sum_{i_1=0}^{l_1-1} b_{i_1}\, \epsilon_{l_1}^{s_1 i_1} = \sum_{i_2=0}^{l_2-1} \sum_{i_1=0}^{l_1-1} a_{i_1 + l_1 i_2}\, \epsilon_{l_1}^{s_1 i_1}\, \epsilon_{l_2}^{s_2 i_2},$$
which is the same as (14). Statement $1^\circ$ is proved.

By Corollary 3 and Theorem 3, the eigenvectors of $M_a^T = M[a']$ coincide with the eigenvectors of $M_a$. Let us find the eigenvalue corresponding to the vector $V_s$. According to Corollary 3, $a'_i = a_{\ominus i}$, where both index components are taken by $\bmod\, l_1$ and $\bmod\, l_2$. This is why
$$\mu'_s = \sum_{i=0}^{l_1 l_2 - 1} a_{\ominus i}\, \epsilon_{l_1}^{s_1 i_1}\, \epsilon_{l_2}^{s_2 i_2} = \sum_{t=0}^{l_1 l_2 - 1} a_t\, \epsilon_{l_1}^{-s_1 t_1}\, \epsilon_{l_2}^{-s_2 t_2}.$$
It can be seen from the last equality that $\mu'_s = \overline{\mu_s}$. Statement $2^\circ$ is proved, and hence the corollary is proved. □
Another property of the eigenvectors of the matrix $M_a$ is given later in Corollary 7.
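The formulas (13) and (14) are easy to check numerically. In the sketch below (helper names ours; the values $l_1 = 2$, $l_2 = 3$ and the random vector $a$ are illustrative), the eigenvector (13) is assembled as a Kronecker product of two root-of-unity vectors.

```python
import numpy as np

L1, L2 = 2, 3
N = L1 * L2

def ominus(i, j):
    return (i % L1 - j % L1) % L1 + L1 * ((i // L1 - j // L1) % L2)

def matrix_M(a):
    return np.array([[a[ominus(j, i)] for j in range(N)] for i in range(N)])

eps1, eps2 = np.exp(2j * np.pi / L1), np.exp(2j * np.pi / L2)

def eigvec(s1, s2):
    """Eigenvector (13): component i1 + L1*i2 equals eps1^(s1*i1)*eps2^(s2*i2)."""
    v1 = eps1 ** (s1 * np.arange(L1))
    v2 = eps2 ** (s2 * np.arange(L2))
    return np.kron(v2, v1)      # i1 varies fastest, matching i = i1 + L1*i2

a = np.random.default_rng(1).standard_normal(N)
Ma = matrix_M(a)
for s1 in range(L1):
    for s2 in range(L2):
        V = eigvec(s1, s2)
        mu = a @ V              # eigenvalue (14)
        assert np.allclose(Ma @ V, mu * V)
# The eigenvectors do not depend on a; only the eigenvalues mu do.
```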
Remark 1. The expression $(\epsilon_{l_1}, \epsilon_{l_2})$ is an ordered pair, and the first place in it is occupied by $\epsilon_{l_1}$, the $l_1$-th root of unity, and the second place in it by $\epsilon_{l_2}$, the $l_2$-th root of unity. Therefore, in the general case, $(\epsilon_{l_1}, \epsilon_{l_2}) \neq (\epsilon_{l_2}, \epsilon_{l_1})$.
Remark 2. In ([24], Corollary 3), the eigenvectors and eigenvalues of matrices, which for $l_1 = l_2 = 2$ are a special case of the matrices $M_a$, are obtained. This is the same as (13) for $l_1 = l_2 = 2$. Indeed, in this case $\epsilon_2 = -1$, $s = s_1 + 2s_2$, $i = i_1 + 2i_2$, and hence the common term of the eigenvector from (13) has the form $(-1)^{s_1 i_1 + s_2 i_2}$, which coincides with the common term of the vector from [24]. The eigenvalues obtained in (14) also coincide with those found in [24] for $l_1 = l_2 = 2$.

Example 4. For the matrix $M_a$ from Example 2, we have $l_1 = 2$, $l_2 = 3$, where $\epsilon_2 = -1$ and $\epsilon_3 = e^{2\pi\mathrm{i}/3}$. Therefore, according to the formula (13), we obtain
$$V_s = \big((-1)^{s_1 i_1}\, \epsilon_3^{s_2 i_2} : i = 0, \dots, 5\big)^T, \quad s = s_1 + 2 s_2 = 0, \dots, 5;$$
for example, $V_0 = (1, 1, 1, 1, 1, 1)^T$, $V_1 = (1, -1, 1, -1, 1, -1)^T$, $V_2 = (1, 1, \epsilon_3, \epsilon_3, \epsilon_3^2, \epsilon_3^2)^T$, and, using the formula (14), we calculate
$$\mu_s = \sum_{i=0}^{5} a_i\, (-1)^{s_1 i_1}\, \epsilon_3^{s_2 i_2}, \quad s = 0, \dots, 5,$$
so that, for example, $\mu_0 = a_0 + a_1 + a_2 + a_3 + a_4 + a_5$ and $\mu_1 = a_0 - a_1 + a_2 - a_3 + a_4 - a_5$.

3. The Problem S
To consider Problem $S$, we need the following statement.
Lemma 1 ([10], Lemma 3.1). Let $S$ be an orthogonal matrix; then the operator $I_S u(x) = u(Sx)$ and the Laplace operator Δ satisfy the equality $\Delta I_S u(x) = I_S \Delta u(x)$ for $u \in C^2(\Omega)$. The operator $I_S$ and the operator $\Lambda = \sum_{k=1}^n x_k \frac{\partial}{\partial x_k}$ also satisfy the equality $\Lambda I_S u(x) = I_S \Lambda u(x)$ for $u \in C^1(\Omega)$.

Corollary 5. Equation (1) generates a matrix equation which is equivalent to it:
$$-M_a\, \Delta U(x) = \lambda\, U(x), \qquad (15)$$
where $U(x) = \big(u(S_1^{i_1} S_2^{i_2} x) : i = 0, \dots, l_1 l_2 - 1\big)^T$ and $\Delta U(x)$ is understood componentwise.

Proof. Let the function $u(x)$ be a solution to equation (1). Let us denote
$$v(x) = \Delta_a u(x) = \sum_{i=0}^{l_1 l_2 - 1} a_i\, \Delta u(S_1^{i_1} S_2^{i_2} x)$$
and $V(x) = \big(v(S_1^{j_1} S_2^{j_2} x) : j = 0, \dots, l_1 l_2 - 1\big)^T$. The function $v(x)$ generates equality (4). Let us apply the Laplace operator Δ to (4). Since the matrices of the form $S_1^{i_1} S_2^{i_2}$ are orthogonal, by Lemma 1, we obtain
$$V(x) = M_a\, \Delta U(x).$$
In the transformations made, the replacement of the summation index $k = i \oplus j$ was used. Hence, using the equality $-\Delta_a u(x) = \lambda u(x)$ (see (1)), which implies that $-v(S_1^{j_1} S_2^{j_2} x) = \lambda u(S_1^{j_1} S_2^{j_2} x)$ for every $j$, we easily obtain (15). Finally, note that the first equation in (15) is the same as (1). This completes the proof. □
Using Lemma 1, we are going to establish the existence of the eigenvalues of Problem $S$.
Theorem 4. Assume that the non-zero function $u(x)$ is an eigenfunction of Problem $S$, and λ is its eigenvalue corresponding to $u(x)$. The function
$$v(x) = (V_s, U(x)) = \sum_{i=0}^{l_1 l_2 - 1} \epsilon_{l_1}^{s_1 i_1}\, \epsilon_{l_2}^{s_2 i_2}\, u(S_1^{i_1} S_2^{i_2} x),$$
where $s = s_1 + l_1 s_2$ and $V_s$ is an eigenvector (13) of the matrix $M_a$ such that $v(x) \not\equiv 0$, is a solution to the boundary value problem
$$\Delta v(x) + \mu\, v(x) = 0, \quad x \in \Omega, \qquad (16)$$
$$v(x) = 0, \quad x \in \partial\Omega, \qquad (17)$$
where $\mu = \lambda / \overline{\mu_s}$, provided $\mu_s \neq 0$.

Proof. Let us take $u(x)$, a non-zero eigenfunction of Problem $S$, and the corresponding eigenvalue λ. In accordance with Corollary 5, equality (15) holds. If we multiply this equality by the vector $V_s$ scalarly, then we obtain
$$-(V_s, M_a \Delta U(x)) = \lambda\, (V_s, U(x)),$$
where we find
$$-(M_a^T V_s, \Delta U(x)) = \lambda\, (V_s, U(x)).$$
Since, due to Corollary 4, the vector $V_s$ is also an eigenvector of the matrix $M_a^T$, and $\overline{\mu_s}$ is its eigenvalue, then we have
$$-\overline{\mu_s}\, (V_s, \Delta U(x)) = \lambda\, (V_s, U(x)),$$
and, since $v(x) = (V_s, U(x))$, we obtain
$$-\overline{\mu_s}\, \Delta v(x) = \lambda\, v(x),$$
where, because $\overline{\mu_s} \neq 0$, we obtain the equality (16) with $\mu = \lambda / \overline{\mu_s}$.

Lastly, because $u(x) = 0$ for $x \in \partial\Omega$, and $|S_1^{i_1} S_2^{i_2} x| = |x| = 1$ for $x \in \partial\Omega$, then we have $u(S_1^{i_1} S_2^{i_2} x) = 0$ for $x \in \partial\Omega$. Therefore, we obtain $v(x) = 0$ for $x \in \partial\Omega$. This completes the proof. □
Let us prove the assertion converse to Theorem 4. It provides an opportunity to find solutions to the main Problem $S$.
Theorem 5. Assume that the non-zero function $v(x)$ is a solution of the boundary value problem (16) and (17) for some $\mu$; then the function $u(x)$ determined from the equality
$$u(x) = (V_s, \mathbf{V}(x)) = \sum_{i=0}^{l_1 l_2 - 1} \epsilon_{l_1}^{s_1 i_1}\, \epsilon_{l_2}^{s_2 i_2}\, v(S_1^{i_1} S_2^{i_2} x), \qquad (18)$$
where $\mathbf{V}(x) = \big(v(S_1^{i_1} S_2^{i_2} x) : i = 0, \dots, l_1 l_2 - 1\big)^T$ and the vector $V_s$ from (13) is an eigenvector of the matrix $M_a^T$ with the eigenvalue $\overline{\mu_s}$, is a solution to Problem $S$ for $\lambda = \overline{\mu_s}\,\mu$, provided $u(x) \not\equiv 0$.

Proof. Let $v(x)$ be a solution to the problem (16) and (17). Consider the vector $\mathbf{V}(x)$ and compose the function
$$u(x) = (V_s, \mathbf{V}(x)) = \sum_{i=0}^{l_1 l_2 - 1} \epsilon_{l_1}^{s_1 i_1}\, \epsilon_{l_2}^{s_2 i_2}\, v(S_1^{i_1} S_2^{i_2} x),$$
where $s = s_1 + l_1 s_2$. It is not difficult to see that
$$\Delta_a u(x) = \sum_{j=0}^{l_1 l_2 - 1} a_j\, \Delta u(S_1^{j_1} S_2^{j_2} x) = \sum_{j} a_j \sum_{i} \epsilon_{l_1}^{s_1 i_1}\, \epsilon_{l_2}^{s_2 i_2}\, \Delta v(S_1^{i_1 + j_1} S_2^{i_2 + j_2} x).$$
Here, we use the substitution of indices $t = i \oplus j$, so that $\epsilon_{l_1}^{s_1 i_1}\epsilon_{l_2}^{s_2 i_2} = \epsilon_{l_1}^{s_1 t_1}\epsilon_{l_2}^{s_2 t_2}\, \epsilon_{l_1}^{-s_1 j_1}\epsilon_{l_2}^{-s_2 j_2}$. Thus,
$$\Delta_a u(x) = \sum_{j} a_j\, \epsilon_{l_1}^{-s_1 j_1}\epsilon_{l_2}^{-s_2 j_2} \sum_{t} \epsilon_{l_1}^{s_1 t_1}\epsilon_{l_2}^{s_2 t_2}\, \Delta v(S_1^{t_1} S_2^{t_2} x) = \overline{\mu_s} \sum_{t} \epsilon_{l_1}^{s_1 t_1}\epsilon_{l_2}^{s_2 t_2}\, \Delta v(S_1^{t_1} S_2^{t_2} x),$$
and therefore because, by Lemma 1,
$$\Delta v(S_1^{t_1} S_2^{t_2} x) = (\Delta v)(S_1^{t_1} S_2^{t_2} x) = -\mu\, v(S_1^{t_1} S_2^{t_2} x),$$
we obtain
$$\Delta_a u(x) = -\overline{\mu_s}\,\mu \sum_{t} \epsilon_{l_1}^{s_1 t_1}\epsilon_{l_2}^{s_2 t_2}\, v(S_1^{t_1} S_2^{t_2} x) = -\overline{\mu_s}\,\mu\, u(x),$$
which means that the function $u(x)$ satisfies the equation (1) with $\lambda = \overline{\mu_s}\,\mu$.

Let us make sure that the boundary conditions (2) are met. Since $|S_1^{i_1} S_2^{i_2} x| = |x|$, then, for $x \in \partial\Omega$, we have
$$u(x) = \sum_{i=0}^{l_1 l_2 - 1} \epsilon_{l_1}^{s_1 i_1}\, \epsilon_{l_2}^{s_2 i_2}\, v(S_1^{i_1} S_2^{i_2} x) = 0.$$
Thus, the function $u(x)$ is a solution to Problem $S$. This completes the proof. □
Example 5. Consider the problem (1) and (2) with $n = 2$, $l_1 = 2$ and $l_2 = 3$. Let us use Theorem 5. To do this, take the eigenvectors of the matrix $M_a$ in the form (13) from Example 4. Let μ be an eigenvalue of the boundary value problem (16) and (17) and $v(x)$ be a corresponding eigenfunction. Then, the eigenfunctions of the problem (1) and (2) corresponding to μ can be taken in the form (18):
$$u_s(x) = \sum_{i=0}^{5} (-1)^{s_1 i_1}\, \epsilon_3^{s_2 i_2}\, v(S_1^{i_1} S_2^{i_2} x), \quad s = 0, \dots, 5.$$
If we use the eigenvalues of the matrix $M_a$ from (14), then the eigenvalues of the problem (1) and (2) corresponding to the eigenfunctions written above look like
$$\lambda_s = \overline{\mu_s}\,\mu, \quad s = 0, \dots, 5.$$

Next, we need to expand a given polynomial into a sum of "generalized parity" polynomials. Let $f(x)$ be some function defined on $\Omega$. Let us denote
$$I_s[f](x) = \frac{1}{l_1 l_2} \sum_{i=0}^{l_1 l_2 - 1} \epsilon_{l_1}^{s_1 i_1}\, \epsilon_{l_2}^{s_2 i_2}\, f(S_1^{i_1} S_2^{i_2} x), \quad s = 0, \dots, l_1 l_2 - 1. \qquad (19)$$
Lemma 2. The function $I_s[f]$ has the "generalized parity" property
$$I_s[f](S_1^{j_1} S_2^{j_2} x) = \epsilon_{l_1}^{-s_1 j_1}\, \epsilon_{l_2}^{-s_2 j_2}\, I_s[f](x), \qquad (20)$$
and the following equality
$$I_s\big[I_{s'}[f]\big](x) = \delta_{s s'}\, I_s[f](x) \qquad (21)$$
holds true. In addition, the function $f$ can be expanded in the form
$$f(x) = \sum_{s=0}^{l_1 l_2 - 1} I_s[f](x). \qquad (22)$$

Proof. It is not hard to see that
$$I_s[f](S_1^{j_1} S_2^{j_2} x) = \frac{1}{l_1 l_2} \sum_{i} \epsilon_{l_1}^{s_1 i_1}\epsilon_{l_2}^{s_2 i_2}\, f(S_1^{i_1 + j_1} S_2^{i_2 + j_2} x) = \epsilon_{l_1}^{-s_1 j_1}\epsilon_{l_2}^{-s_2 j_2}\, I_s[f](x),$$
where, as in Theorem 2, the replacement of the index $t = i \oplus j$ is done. Therefore, equality (20) holds true.

Consider now the equality (22). It is easy to see that
$$\sum_{s=0}^{l_1 l_2 - 1} I_s[f](x) = \frac{1}{l_1 l_2} \sum_{i=0}^{l_1 l_2 - 1} f(S_1^{i_1} S_2^{i_2} x) \sum_{s_1=0}^{l_1-1} \sum_{s_2=0}^{l_2-1} \epsilon_{l_1}^{s_1 i_1}\, \epsilon_{l_2}^{s_2 i_2}. \qquad (23)$$
Let us transform the inner sum from the right side of (23). Let $i_1 \neq 0$; then $\epsilon_{l_1}^{i_1} \neq 1$, which means the geometric progression below is non-trivial. Taking into account that $\epsilon_{l_1}^{l_1} = 1$, by a simple combinatorial identity, we find
$$\sum_{s_1=0}^{l_1-1} \epsilon_{l_1}^{s_1 i_1} = \frac{\epsilon_{l_1}^{l_1 i_1} - 1}{\epsilon_{l_1}^{i_1} - 1} = 0. \qquad (24)$$
If $i_1 = 0$, then $\epsilon_{l_1}^{s_1 i_1} = 1$, and so $\sum_{s_1=0}^{l_1-1} \epsilon_{l_1}^{s_1 i_1} = l_1$. The same holds for the sum over $s_2$. Therefore, the expression on the right side of (23) is equal to $f(x)$. This proves the equality (22).

Now, let us prove (21). It is not hard to see that, using (20) and (22), we can write
$$I_s\big[I_{s'}[f]\big](x) = \frac{1}{l_1 l_2} \sum_{i} \epsilon_{l_1}^{s_1 i_1}\epsilon_{l_2}^{s_2 i_2}\, I_{s'}[f](S_1^{i_1} S_2^{i_2} x) = \frac{1}{l_1 l_2} \sum_{i} \epsilon_{l_1}^{(s_1 - s_1') i_1}\epsilon_{l_2}^{(s_2 - s_2') i_2}\, I_{s'}[f](x).$$
Since, by virtue of (24), the formula
$$\frac{1}{l_1 l_2} \sum_{i=0}^{l_1 l_2 - 1} \epsilon_{l_1}^{(s_1 - s_1') i_1}\, \epsilon_{l_2}^{(s_2 - s_2') i_2} = \delta_{s_1 s_1'}\, \delta_{s_2 s_2'}$$
holds true, then (21) follows from the last equality. Here, the equalities $\epsilon_{l_k}^{l_k} = 1$, $k = 1, 2$, are taken into account. The lemma is proved. □
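Lemma 2 can be verified numerically. The sketch below is ours: the rotation matrices, the test function, and the sample point are example choices, and the sign convention in (19) is the one adopted above.

```python
import numpy as np

L1, L2 = 2, 3
N = L1 * L2
eps1, eps2 = np.exp(2j * np.pi / L1), np.exp(2j * np.pi / L2)

def rot(t):   # planar rotations commute, so S1 S2 = S2 S1
    return np.array([[np.cos(t), -np.sin(t)], [np.sin(t), np.cos(t)]])

S1, S2 = rot(2 * np.pi / L1), rot(2 * np.pi / L2)   # S1^2 = I, S2^3 = I

def f(x):     # an arbitrary test function on R^2
    return x[0] ** 3 - 2 * x[0] * x[1] + 0.5 * x[1] ** 2

def proj(f, s1, s2, x):
    """Generalized-parity component I_s f(x) from (19)."""
    total = 0.0j
    for i1 in range(L1):
        for i2 in range(L2):
            y = np.linalg.matrix_power(S1, i1) @ np.linalg.matrix_power(S2, i2) @ x
            total += eps1 ** (s1 * i1) * eps2 ** (s2 * i2) * f(y)
    return total / N

x = np.array([0.3, -0.7])
# Parity property (20): composing with S1 or S2 multiplies by a character.
assert np.allclose(proj(f, 1, 2, S1 @ x), eps1 ** (-1) * proj(f, 1, 2, x))
assert np.allclose(proj(f, 1, 2, S2 @ x), eps2 ** (-2) * proj(f, 1, 2, x))
# Decomposition (22): f is the sum of its parity components.
total = sum(proj(f, s1, s2, x) for s1 in range(L1) for s2 in range(L2))
assert np.allclose(total, f(x))
```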
Example 6. Let $n = 2$, and let $S_1$ and $S_2$ be the rotation matrices by the angles $\pi$ and $2\pi/3$, respectively, so that $l_1 = 2$ and $l_2 = 3$. Taking into account that $\epsilon_2 = -1$ and $\epsilon_3 = e^{2\pi\mathrm{i}/3}$, we obtain from (19) the six components $I_s[f]$, $s = 0, \dots, 5$, of a given function $f$. Let the function $f(x)$ be even in $x$, i.e., $f(-x) = f(x)$. Then, its components $I_1[f]$, $I_3[f]$ and $I_5[f]$ of generalized parity $s = 1, 3$ and $5$ are zero.
Consider a homogeneous harmonic polynomial $H_m(x)$ of degree $m$ and let $(r, \varphi)$ be the polar coordinates of $x \in \mathbb{R}^2$. Then, there exist constants $\alpha$ and $\beta$ such that
$$H_m(x) = r^m\big(\alpha \cos m\varphi + \beta \sin m\varphi\big),$$
and hence
$$H_m(x) = r^m\big(c_+ e^{\mathrm{i} m \varphi} + c_- e^{-\mathrm{i} m \varphi}\big), \quad c_\pm = \frac{\alpha \mp \mathrm{i}\beta}{2}.$$
The operator $I_s$ extracts the following components of the harmonic polynomial $H_m$: the rotations act on $e^{\pm\mathrm{i} m\varphi}$ by the factors $e^{\pm\mathrm{i} m(\pi j_1 + 2\pi j_2/3)}$, and comparison with the parity property (20) shows that $I_s[H_m]$ is non-zero only for $s_1 = m \bmod 2$. Thus, for $s_1 = m \bmod 2$,
$$I_s[H_m](x) = c_+\, r^m e^{\mathrm{i} m\varphi} \ \text{for}\ s_2 = (-m) \bmod 3, \qquad I_s[H_m](x) = c_-\, r^m e^{-\mathrm{i} m\varphi} \ \text{for}\ s_2 = m \bmod 3,$$
and the rest of the components vanish.

4. Finding Solutions to Problem S
Let us rewrite the result of Theorem 5 in a more convenient form.
Theorem 6. Solutions to the boundary value problem (1) and (2) can be represented as
$$u_s(x) = I_s[v](x), \quad s = 0, \dots, l_1 l_2 - 1, \qquad (25)$$
where the operator $I_s$ is defined in (19) and the function $v$ is a solution to the boundary value problem (16) and (17) for some μ; the corresponding eigenvalues are
$$\lambda_s = \overline{\mu_s}\,\mu. \qquad (26)$$
The eigenfunctions $u_s$ for $s = 0, \dots, l_1 l_2 - 1$ and fixed μ are orthogonal in $L_2(\Omega)$. The functions $u_s$ are a part of the function $v$ in the sense that
$$v(x) = \sum_{s=0}^{l_1 l_2 - 1} u_s(x). \qquad (27)$$

Proof. Denote $u_s = I_s[v]$. It is clear that $u_s$ is also an eigenfunction of the problem (1) and (2). It is not hard to see that (18) implies
$$u_s(x) = \frac{1}{l_1 l_2} \sum_{i=0}^{l_1 l_2 - 1} \epsilon_{l_1}^{s_1 i_1}\, \epsilon_{l_2}^{s_2 i_2}\, v(S_1^{i_1} S_2^{i_2} x),$$
i.e., $u_s$ coincides, up to the factor $1/(l_1 l_2)$, with the function (18), which proves the first formula (25).

The eigenvalues of the problem (1) and (2) corresponding to the eigenfunction $u_s$, by Theorem 5 and (14) from Corollary 4, can be taken in the form (26):
$$\lambda_s = \overline{\mu_s}\,\mu = \mu \sum_{i=0}^{l_1 l_2 - 1} a_i\, \epsilon_{l_1}^{-s_1 i_1}\, \epsilon_{l_2}^{-s_2 i_2}.$$

We now prove that the functions $u_s$ and $u_{s'}$ for $s \neq s'$ are orthogonal in $L_2(\Omega)$. Indeed, if $s \neq s'$, then either $s_1 \neq s_1'$ or $s_2 \neq s_2'$. Let, for example, $s_1 \neq s_1'$, and hence $\epsilon_{l_1}^{s_1' - s_1} \neq 1$. Then, using Lemma 4.1 from [10], which allows the change of variables $x \to S_1 x$ in the integral over Ω, and the equality (20) from Lemma 2, we have
$$\int_\Omega u_s(x)\, \overline{u_{s'}(x)}\, dx = \int_\Omega u_s(S_1 x)\, \overline{u_{s'}(S_1 x)}\, dx = \epsilon_{l_1}^{s_1' - s_1} \int_\Omega u_s(x)\, \overline{u_{s'}(x)}\, dx. \qquad (28)$$
Since $\epsilon_{l_1}^{s_1' - s_1} \neq 1$, then this immediately implies the orthogonality
$$\int_\Omega u_s(x)\, \overline{u_{s'}(x)}\, dx = 0.$$
Finally, the equality (27) is a consequence of the equality (22) from Lemma 2 for $f = v$. The theorem is proved. □
Corollary 6. If $H_m(x)$ is a harmonic polynomial with real coefficients, then the harmonic polynomials $I_s[H_m]$, $s = 0, \dots, l_1 l_2 - 1$, are orthogonal on $\partial\Omega$ and linearly independent.

Proof. Indeed, let $s \neq s'$, which is possible, for example, for $s_1 \neq s_1'$, whence $\epsilon_{l_1}^{s_1' - s_1} \neq 1$. By analogy with (28) and according to Lemma 4.1 from [10], we obtain
$$\int_{\partial\Omega} I_s[H_m]\, \overline{I_{s'}[H_m]}\, ds_x = \epsilon_{l_1}^{s_1' - s_1} \int_{\partial\Omega} I_s[H_m]\, \overline{I_{s'}[H_m]}\, ds_x,$$
where, because $\epsilon_{l_1}^{s_1' - s_1} \neq 1$, we obtain the orthogonality of $I_s[H_m]$ and $I_{s'}[H_m]$ on $\partial\Omega$, and hence their linear independence. The corollary is proved. □
Corollary 7. The matrix $P = \frac{1}{\sqrt{l_1 l_2}}\big(V_0, V_1, \dots, V_{l_1 l_2 - 1}\big)$ consisting of the normalized eigenvectors of the matrix $M_a$ is orthogonal and symmetric.

Proof. Let $V_s$ and $V_{s'}$ be two different columns of the matrix $P$. Then, using the equality $\overline{\epsilon_l^{k}} = \epsilon_l^{-k}$, we write
$$\big(V_s, \overline{V_{s'}}\big) = \sum_{i=0}^{l_1 l_2 - 1} \epsilon_{l_1}^{(s_1 - s_1') i_1}\, \epsilon_{l_2}^{(s_2 - s_2') i_2} = \sum_{i_1=0}^{l_1-1} \epsilon_{l_1}^{(s_1 - s_1') i_1} \sum_{i_2=0}^{l_2-1} \epsilon_{l_2}^{(s_2 - s_2') i_2}.$$
If $s \neq s'$, then $s \ominus s' \neq 0$, which means that one of the inequalities $s_1 \neq s_1'$ or $s_2 \neq s_2'$ holds true. Therefore, either $\epsilon_{l_1}^{s_1 - s_1'} \neq 1$ or $\epsilon_{l_2}^{s_2 - s_2'} \neq 1$, which means that, similarly to (24), we obtain $(V_s, \overline{V_{s'}}) = 0$.

The symmetry of the matrix $P$ follows from the equalities
$$P_{i,s} = \frac{1}{\sqrt{l_1 l_2}}\, \epsilon_{l_1}^{s_1 i_1}\, \epsilon_{l_2}^{s_2 i_2} = P_{s,i}.$$
The corollary is proved. □
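A numerical confirmation of Corollary 7 (a sketch, with $l_1 = 2$, $l_2 = 3$; "orthogonal" is checked here as unitarity in the Hermitian inner product, matching the conjugation used in the proof):

```python
import numpy as np

L1, L2 = 2, 3
N = L1 * L2
eps1, eps2 = np.exp(2j * np.pi / L1), np.exp(2j * np.pi / L2)

def comps(k):
    """Components (k1, k2) of k = k1 + L1*k2."""
    return k % L1, k // L1

# P[i, s] = eps1^(i1*s1) * eps2^(i2*s2) / sqrt(N); the columns are the
# normalized eigenvectors (13).
P = np.array([[eps1 ** (comps(i)[0] * comps(s)[0]) *
               eps2 ** (comps(i)[1] * comps(s)[1]) for s in range(N)]
              for i in range(N)]) / np.sqrt(N)

assert np.allclose(P, P.T)                        # symmetry
assert np.allclose(P @ np.conj(P).T, np.eye(N))   # orthogonality (unitarity)
```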
Note that the matrix of eigenvectors in the case of multiple involution, for $l_1 = l_2 = 2$, has a similar property [24].
Now, we can explore the completeness of the eigenfunctions of Problem $S$.

Let $\mathcal{H}_m$ be the space of homogeneous harmonic polynomials of degree $m$. By Lemma 2, it can be split into a sum of $l_1 l_2$ subspaces $\mathcal{H}_m^{(s)}$, $s = 0, \dots, l_1 l_2 - 1$, orthogonal on $\partial\Omega$, of homogeneous harmonic polynomials of generalized parity $s$. Let $\{H_{m,k}^{(s)} : k = 1, \dots, h_m^{(s)}\}$ be a complete in $\mathcal{H}_m^{(s)}$ system of polynomials orthogonal on $\partial\Omega$.
Theorem 7. Let the numbers $\mu_s$ defined in (14) be all not zero. Then, the system of eigenfunctions of the Dirichlet problem (1) and (2) is complete in $L_2(\Omega)$ and has the form
$$u_{s,m,k}^{(j)}(x) = |x|^{1 - n/2}\, J_{m + n/2 - 1}\big(\sqrt{\mu_m^{(j)}}\, |x|\big)\, H_{m,k}^{(s)}(x/|x|), \qquad (29)$$
where $s = 0, \dots, l_1 l_2 - 1$, $m \in \mathbb{N}_0$, $k = 1, \dots, h_m^{(s)}$, $J_\nu$ is the Bessel function of the first kind, and $\sqrt{\mu_m^{(j)}}$, $j \in \mathbb{N}$, is a root of the Bessel function $J_{m + n/2 - 1}$. The eigenvalues of Problem $S$ are the numbers $\lambda_{s,m}^{(j)} = \overline{\mu_s}\,\mu_m^{(j)}$, defined in (26).

Proof. By Theorem 6, to find the eigenfunctions of the problem (1) and (2), it is necessary to find the function $v$, a solution to the problem (16) and (17), and then write out the function $u_s = I_s[v]$. A maximal system of eigenfunctions of the problem (16) and (17) has the form (see, for example, [27,28])
$$v_{m,k}^{(j)}(x) = |x|^{1 - n/2}\, J_{m + n/2 - 1}\big(\sqrt{\mu_m^{(j)}}\, |x|\big)\, H_{m,k}(x/|x|), \qquad (30)$$
where $m \in \mathbb{N}_0$, $H_{m,k}(x)$ ($k = 1, \dots, h_m$) is a maximal system of linearly independent homogeneous harmonic polynomials of degree $m$, and $\sqrt{\mu_m^{(j)}}$ is a root of the Bessel function $J_{m + n/2 - 1}$. Then, since $|S_1^{i_1} S_2^{i_2} x| = |x|$, then
$$I_s\big[v_{m,k}^{(j)}\big](x) = |x|^{1 - n/2}\, J_{m + n/2 - 1}\big(\sqrt{\mu_m^{(j)}}\, |x|\big)\, I_s[H_{m,k}](x/|x|).$$
Since $I_s[H_{m,k}] \in \mathcal{H}_m^{(s)}$, then choose in the space $\mathcal{H}_m^{(s)}$ a complete system of polynomials $H_{m,k}^{(s)}$ orthogonal on $\partial\Omega$, to which correspond some polynomials from the system $\{I_s[H_{m,k}]\}$. Note that, for some value of $m$, it is possible that $h_m^{(s)} = 0$, that is, for such $m$, the component of parity $s$ is missing (see Example 6), and therefore $I_s[H_{m,k}] = 0$. Choosing in the resulting expression for $I_s[v_{m,k}^{(j)}]$ instead of $I_s[H_{m,k}]$ the harmonic polynomials $H_{m,k}^{(s)}$ and adding indices, indicating the dependence of the eigenfunction $u$ on $s$, $m$ and $k$, we have (29).

Since, in formula (29), $H_{m,k}^{(s)}$ are homogeneous harmonic polynomials of degree $m$, the functions $u_{s,m,k}^{(j)}$ have the form (30) and hence are eigenfunctions of the problem (16) and (17). The reverse is also true. Each homogeneous harmonic polynomial $H_{m,k}$, by the formula (22), can be represented as a linear combination of harmonic polynomials of the form $I_s[H_{m,k}]$, and those are linear combinations of the polynomials $H_{m,k}^{(s)}$, and hence any function from (30) is a linear combination of functions of the form (29). The eigenvalues of the problem (1) and (2), in accordance with Theorem 6, are found from (26).

Let us study the orthogonality of the functions $u_{s,m,k}^{(j)}$. The equality holds true
$$\int_\Omega u_{s,m,k}^{(j)}\, \overline{u_{s',m',k'}^{(j')}}\, dx = \int_0^1 r\, J_{m + n/2 - 1}\big(\sqrt{\mu_m^{(j)}}\, r\big)\, J_{m' + n/2 - 1}\big(\sqrt{\mu_{m'}^{(j')}}\, r\big)\, dr \int_{\partial\Omega} H_{m,k}^{(s)}\, \overline{H_{m',k'}^{(s')}}\, ds_\theta. \qquad (31)$$

Consider the right side of the obtained equality. For $m = m'$ and $j \neq j'$, due to the properties of the Bessel functions (orthogonality in $L_2\big((0,1); r\, dr\big)$), the first factor is zero. If $m \neq m'$, by the property of harmonic polynomials, the second factor from the right side is zero. If $m = m'$, $j = j'$, then, for $s \neq s'$, the second factor from the right side is zero by Corollary 6. Finally, if $s = s'$ and $k \neq k'$, then the second factor is zero in accordance with the scheme for constructing the polynomials $H_{m,k}^{(s)}$.

By Lemma 2 from ([29], p. 33), the obtained system (29) of functions is complete in $L_2(\Omega)$ because the system $\{H_{m,k}^{(s)} : s, k\}$ is orthogonal and complete in $L_2(\partial\Omega)$ for each $m$, and the system of radial factors is orthogonal and complete in $L_2\big((0,1); r\, dr\big)$ for different $j$ and $k$. The theorem is proved. □
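Assuming the eigenvalue structure $\lambda = \overline{\mu_s}\,\mu_m^{(j)}$ of (26), the spectrum is easy to tabulate numerically for $n = 2$, where the Bessel orders are integers. The sketch below is ours, and the coefficients $a_i$ are hypothetical.

```python
import numpy as np
from scipy.special import jn_zeros

L1, L2 = 2, 3
eps1, eps2 = np.exp(2j * np.pi / L1), np.exp(2j * np.pi / L2)
a = np.array([1.0, 0.2, 0.1, 0.0, 0.1, 0.0])      # hypothetical coefficients

def mu_s(s1, s2):
    """Eigenvalue (14) of the matrix M_a."""
    return sum(a[i1 + L1 * i2] * eps1 ** (s1 * i1) * eps2 ** (s2 * i2)
               for i1 in range(L1) for i2 in range(L2))

for m in range(3):                    # degree of the harmonic polynomial
    roots = jn_zeros(m, 2)            # first two positive roots of J_m
    for mu in roots ** 2:             # Dirichlet eigenvalues mu_m^(j) of -Δ
        for s1 in range(L1):
            for s2 in range(L2):
                lam = np.conj(mu_s(s1, s2)) * mu   # assumed form (26)
                print(f"m={m}, s=({s1},{s2}): lambda = {lam:.4f}")
```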
Example 7. Let $n = 2$, $l_1 = 2$, $l_2 = 3$, and let $S_1$ and $S_2$ be the rotations from Example 6; then Problem $S$ has the form
$$-\sum_{i=0}^{5} a_i\, \Delta u(S_1^{i_1} S_2^{i_2} x) = \lambda u(x), \quad x \in \Omega; \qquad u(x) = 0, \quad x \in \partial\Omega.$$
Using Example 6, we find the eigenfunctions of the problem (1) and (2). In the polar coordinate system, the eigenfunctions of the problem (16) and (17) are determined according to the equality (30) in the form
$$v_m^{(j)}(r, \varphi) = J_m\big(\sqrt{\mu_m^{(j)}}\, r\big)\, e^{\pm \mathrm{i} m \varphi},$$
where $\sqrt{\mu_m^{(j)}}$ is a root of the Bessel function $J_m$. In the written formula, the dependence of the eigenfunction on the index $k$ is not indicated because, in accordance with Example 6, the dimension of the space $\mathcal{H}_m^{(s)}$ is equal to 1. According to (25) and taking into account (26), we write
$$u_{s,m}^{(j)}(r, \varphi) = I_s\big[v_m^{(j)}\big](r, \varphi) = J_m\big(\sqrt{\mu_m^{(j)}}\, r\big)\, e^{\pm \mathrm{i} m \varphi}, \qquad \lambda_{s,m}^{(j)} = \overline{\mu_s}\,\mu_m^{(j)},$$
where $s_1 = m \bmod 2$, $s_2 = (\mp m) \bmod 3$, $\sqrt{\mu_m^{(j)}}$ is a root of the Bessel function $J_m$, and $\mu_s$ is defined in (14). It is clear that the obtained system of eigenfunctions is complete in $L_2(\Omega)$.

Let $n = 3$. Then, the maximal system of homogeneous harmonic polynomials of degree $m$ in (30) has the form [27]
$$H_{m,k}(x) = r^m\, P_m^{(k)}(\cos\theta)\, \{\cos k\varphi,\ \sin k\varphi\}, \quad k = 0, 1, \dots, m, \qquad (32)$$
where $P_m^{(k)}$ are the associated Legendre functions and $(r, \theta, \varphi)$ is the spherical coordinate system. The system (32) has $2m + 1$ members for every $m$ because $\dim \mathcal{H}_m = 2m + 1$. In this case, the dimension of the space $\mathcal{H}_m$ is greater than 1. Note that the operator $I_s$ acts only on the second multiplier of the polynomials in (32), i.e., on the functions $\cos k\varphi$ and $\sin k\varphi$, provided the rotations $S_1$ and $S_2$ act in the plane of the angle $\varphi$. For example, in the space $\mathcal{H}_1$, one can choose the basic polynomials $r\cos\theta$, $r\sin\theta\cos\varphi$, $r\sin\theta\sin\varphi$ and apply $I_s$ to each of them. The remaining eigenfunctions are obtained similarly.
   5. Conclusions
The results obtained allow one to find explicitly, using the formula (29), the eigenfunctions and eigenvalues of the boundary value problem (1) and (2) for the nonlocal differential equation with double involution. The completeness of the system of eigenfunctions makes it possible to use the Fourier method to construct solutions of initial-boundary value problems for nonlocal parabolic and hyperbolic equations.
Possible applications of the obtained results can be found in the modeling of optical systems, since differential equations with involution are an important part of the general theory of functional differential equations, which has numerous applications in optics. Applications of equations with involution in modeling optical systems are given, for example, in [30,31]. In particular, in [30], mathematical models important for applications in nonlinear optics are considered in the form of nonlinear functional differential equations of parabolic type with feedback and a transformation of spatial variables, which is specified by an involution operator. The parabolic functional differential equation considered there describes the dynamics of the phase modulation of a light wave in an optical system with an involution operator $Q$ such that $Q^2 = I$.
With the above in mind, as further research steps on the topic of the present article, we are going to investigate nonlocal initial-boundary value problems with involution for parabolic equations. In addition, we are going to study nonlocal boundary value problems in the case of multiple involution of arbitrary orders, generalizing the results obtained in [24], and also to consider similar boundary value problems for a nonlocal biharmonic equation.