Article

Complex Connections between Symmetry and Singularity Analysis

Asghar Qadir
Pakistan Academy of Sciences, 3 Constitution Avenue, G-5/2, Islamabad 44000, Pakistan
Math. Comput. Appl. 2024, 29(1), 15; https://doi.org/10.3390/mca29010015
Submission received: 23 October 2023 / Revised: 14 February 2024 / Accepted: 16 February 2024 / Published: 19 February 2024
(This article belongs to the Special Issue Symmetry Methods for Solving Differential Equations)

Abstract

In this paper, it is noted that three apparently disparate areas of mathematics—singularity analysis, complex symmetry analysis and the distributional representation of special functions—have a basic commonality in the underlying methods used. The insights obtained from the first of these provide a much-needed explanation for the effectiveness of the latter two. The consequent explanations are provided in the form of two theorems and their corollaries.

1. Introduction

The shortest path between two truths in the real domain passes through the complex domain—Jacques Hadamard (1991)
Methods to solve linear ordinary differential equations (ODEs) were developed soon after differential calculus. However, solving nonlinear differential equations (DEs) was limited to some special cases, and no general methods were available until Sophus Lie and Paul Painlevé provided some generality by very different approaches. Lie [1] attempted to adapt the methods that Abel and Galois had used to resolve the issue of solving polynomial equations by means of radicals to the solution of DEs. Painlevé [2,3] extended the methods of Frobenius [4] for solving second-order, linear ODEs about regular singular points to deal with more DEs. His basic new input was to take the dependent and independent variables to lie in the complex domain, $\mathbb{C}$, and allow the singularities to move off the real axis, $\mathbb{R}$.
Whereas Painlevé’s methods had to be considered on a case-by-case basis, using Lie’s methods, each symmetry can be used to reduce the number of variables in partial differential equations (PDEs) or reduce the order of the equations, regardless of whether they are linear or not. As such, if there are enough symmetries available, the DE can be reduced from partial to ordinary, then solved by using one symmetry at a time to reduce the order down to zero. The key question of what would be “enough symmetries” was answered by Lie and others by providing general criteria. Of course, Lie’s methods cannot be applied if the equations do not have enough symmetries.
There have been various developments in symmetry analysis contributed by Lie and others that I will not go into at present. However, while Lie had assumed that the independent and dependent variables are complex, he never made explicit use of this fact. Of course, as Ali, Mahomed and Qadir (AMQ) [5,6] pointed out, the dependent variables must then be complex differentiable and, hence, complex analytic, thus satisfying the Cauchy–Riemann equations (CREs). The CREs have to be incorporated into the system of equations, so the symmetry structure is changed. It was shown [7] that if some criteria are met, there is a correspondence between two-dimensional systems of real ODEs and scalar equations of a complex dependent variable that depend on one real variable. This correspondence led [8] to solutions of two-dimensional real systems having fewer symmetries than are required for symmetry solutions of the systems—including no symmetry at all! However, it was not really clear how and why the complex procedure can provide the dramatic results it does. To be able to use all three techniques to obtain results, it is essential to obtain criteria to apply them generally. It is hoped that by identifying the commonality of the three methods, it will be possible to formulate such criteria.
The plan of the paper is as follows. In the next section, a very brief review of the relevant, salient points of singularity analysis is provided. In Section 3, symmetry analysis and the complex methods are presented, followed, in the subsequent section, by a review of the singular representation of some special functions. In Section 5 the problem of defining a complex variational principle and its resolution are discussed. In Section 6 the complex connection between them is identified and the results are stated in the form of two theorems, each with two corollaries. A brief discussion and conclusion is given in the last section.

2. Review of Painlevé and Singularity Analysis

The power series method to solve linear ODEs uses a term-by-term cancellation of a power series with arbitrary coefficients. The cancellation imposes enough constraints on the coefficients so that the number of arbitrary coefficients equals the order of the ODE, $n$. The series converges if the points are regular, i.e., the coefficients of the ODE do not diverge. If some coefficients diverge at some point, $x_0$, it is said to be singular. Writing the ODE as $\sum_{i=0}^{n} P_i(x)\, y^{(n-i)}(x) = 0$, with $P_0 = 1$, and assuming that the singular behaviour of the function can be approximated by $\alpha (x - x_0)^p$, where $p < 0$, Frobenius [4] extended the method to those singular points at which $(x - x_0)^{i} P_i(x)$ is regular. Such points are called regular singular points. Staying in the real context, $p$ could even be a fraction, provided care is taken to approach $x_0$ only from above. The series generically converges in some restricted domain.
Painlevé [2] took the natural next step of converting to the complex domain, writing the ODE as $\sum_{i=0}^{n} P_i(z)\, w^{(n-i)}(z) = 0$. A singular point forced on one by the ODE in the real domain is called a “fixed singularity”. In the complex domain, on the other hand, singularities whose locations depend on the initial conditions can arise; these are called “movable singularities”. Notice that the restriction on how to approach the singular point ($z_0$) no longer applies. For example, while $1/(x^2 + a^2)$ is not singular anywhere, $1/(z^2 + a^2)$ is singular at $\pm\iota a$. As such, there could be a number of singular points to expand the series about. Consequently, new solutions could be sought about each separate movable singularity. This opens the door to many more solutions than could exist in the real domain.
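To make the distinction concrete, the following minimal sketch (mine, not from the paper) uses sympy to solve the simple nonlinear ODE $y' = y^2$: the solution has a pole whose location depends on the constant of integration, i.e., a movable singularity, even though the coefficients of the equation are singular nowhere.

```python
# Minimal illustration of a movable singularity (illustrative sketch, not from the paper).
import sympy as sp

x = sp.symbols('x')
y = sp.Function('y')

# dy/dx = y**2 has the general solution y = -1/(x + C1):
sol = sp.dsolve(sp.Eq(y(x).diff(x), y(x)**2))
print(sol)   # Eq(y(x), -1/(C1 + x)); the pole at x = -C1 moves with the initial data
```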
To follow Painlevé’s procedure and its similarity to and difference from the complex methods of AMQ, it is worth looking at a simple illustrative example (given by Ramani, Grammaticos and Bountis (RGB) [9]). Consider the following general first-order, nonlinear ODE:
$y'(x) = f(x, y),$
and write it in parametric form in terms of the complex variable $z$ as
$dx/dz = u(x, y), \qquad dy/dz = v(x, y).$
Painlevé assumed (what is now called the Painlevé property) that there are only poles in $f$ when looked at in the complex plane (not essential singularities like $e^{1/z}$, for example). Let this ODE be singular at some $z_0$ and retain only the dominant terms for $x$ and $y$. Writing $z - z_0 = \tau$, $x = a\tau^p$, $y = b\tau^q$ $(p < 0)$, there are four arbitrary parameters, $(p, q, a, b)$, to be determined. The parameters have to satisfy some constraints for the ODE to hold near the singularity. As such, though there is some freedom of choice in the values of the parameters, it is not total. One of the remaining free parameters is needed for the choice of the movable singularity.
Making the example more concrete, in Equation (2), take
$u(x, y) = x(k - x - y), \qquad v(x, y) = y(x - 1),$
where $k$ is a given constant in the ODE. As it is a 2D system of first-order ODEs, there should be two free constants: one to locate the moving singularity and the second to give the arbitrary constant. Putting the leading terms of Equation (3) into Equation (2), there are two distinct cases: either (i) $q > p$ or (ii) $q = p$. In the first case, the $\tau^q$ term becomes irrelevant, and the first equation gives $p = -1$, $a = 1$, with no constraint on $b$; in the second case, they give the same $p$, but here, $a = -1$ and $b = 2$, so that the contributions of $x$ and $y$ in the first equation cancel. Notice that there is only one relevant, arbitrary constant to determine the position of the movable singularity in either case, which also has to be the constant of integration. Hence, the position is not determined but chosen.
Using the Laurent series to cancel the next-to-leading terms for $x$ and $y$, $c\tau^{p+1}$ and $d\tau^{q+1}$: in case (i), $c = k$, with no constraint on $d$, which is then the second constant; and in case (ii), $2d - c = k$, which again gives the second constant. Instead of the Laurent series, Painlevé’s procedure for the next term puts
$x = a\tau^p(1 + c\tau^r), \qquad y = b\tau^q(1 + d\tau^r) \qquad (r > 0).$
The leading terms are already cancelled, so one retains only the coefficients of the derivative of $\tau^r$, which means that the new leading term is linear in $r$. The new terms are called “resonances”. Retaining only these new leading terms (as the first ones had already cancelled out) yields a matrix equation, $Q \cdot C = 0$, where $C$ is the vector with components $c$ and $d$. Therefore, $\det(Q) = 0$. For case (i), this yields $r = -1, 0$. The first root does not satisfy the requirement that $r > 0$. This root is always present for autonomous systems and is not a resonance. It simply corresponds to the arbitrary constant for the first-order ODE. The second root does not alter the original solution, so it is trivial and does not give anything new. For case (ii), the values are $r = -1, 2$. Here, the second root is non-trivial. In this case, Equation (4) reduces to
$x = -\tau^{-1}(1 + c\tau^{2}), \qquad y = 2\tau^{-1}(1 + d\tau^{2}).$
If r is fractional, one obtains branch points. However, it may be possible to find suitable transformations of variables in the case of fractional r to make it an integer in the transformed equation. Also, one has not actually located the movable singularity in this example, as it is a free choice that corresponds to the integration constant. This is because it is an autonomous system, and no position is selected by the equation, as occurred in case (ii) for the term k. If it is not autonomous, it may be that, as with the Cauchy–Euler equations, a similarity transformation may be able to reduce it to autonomous, as was done by Paliathanasis, Taves and Leach [10]. Otherwise, one might be able to deal with non-autonomous ODEs by some other transformation of variables.
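As a check on the numbers quoted above, the following sketch (mine, not part of the paper) uses sympy to redo the case (ii) dominant balance and resonance calculation for the concrete system of Equation (3); the helper names are of course only illustrative.

```python
# Dominant balance and resonances for dx/dz = x*(k - x - y), dy/dz = y*(x - 1),
# near a movable singularity at z0, with tau = z - z0 (illustrative sketch).
import sympy as sp

tau, k, r, a, b, X, Y = sp.symbols('tau k r a b X Y')

def F(x, y):
    # Right-hand sides of the parametric system, Equation (3).
    return x*(k - x - y), y*(x - 1)

# Case (ii) leading behaviour: x ~ a/tau, y ~ b/tau; match the tau**(-2) terms.
x0, y0 = a/tau, b/tau
f0, g0 = F(x0, y0)
balance = [sp.limit((sp.diff(x0, tau) - f0)*tau**2, tau, 0),
           sp.limit((sp.diff(y0, tau) - g0)*tau**2, tau, 0)]
print(sp.solve(balance, [a, b], dict=True))   # contains {a: -1, b: 2}

# Resonances: perturb by X*tau**(r-1), Y*tau**(r-1) and keep the linear tau**(r-2) terms.
x1, y1 = -1/tau + X*tau**(r - 1), 2/tau + Y*tau**(r - 1)
f1, g1 = F(x1, y1)
residual = [sp.diff(x1, tau) - f1, sp.diff(y1, tau) - g1]
Q = sp.Matrix(2, 2, lambda i, j: sp.powsimp(
        sp.diff(residual[i], [X, Y][j]).subs({X: 0, Y: 0})*tau**(2 - r)).subs(tau, 0))
print(Q, sp.solve(Q.det(), r))                # det(Q) = 0 gives r = -1, 2
```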
The Painlevé procedure gives only an approximate solution near the movable singularity. One could now develop a power series solution by using this as an extension of Frobenius’ method. Of course, there is no reason to restrict the system to two variables. In the example, the limitation came only because one started with a first-order scalar ODE. The procedure could be used for any $n$-dimensional system of ODEs, and the same search for poles and resonances could be carried out. For first order, it turns out that the only equation with the Painlevé property is the Riccati equation, which has a simple pole and can be solved more easily by a transformation of variables. One can also proceed to higher orders. For second-order ODEs, it is found that there are fifty scalar equations with the Painlevé property of having only movable-pole singularities (see, for example, Ref. [11]). For still higher-order ODEs, there is no complete classification, and it is a matter of trial and error to find ODEs with only movable-pole singularities. The purpose of this section is not to provide a primer for Painlevé analysis but to bring out the fact that it can provide solutions where other methods do not seem to work and to highlight the key required ingredient of movable-pole singularities for the system to be solvable. Of course, PDEs are not excluded, but they can only be solved by reducing them to ODEs, as is done by using transformation of variables.
It is of special interest to note that Painlevé analysis is also useful for integrating Hamiltonian systems. A Hamiltonian system,
$dq_i/dt = \partial H/\partial p_i, \qquad dp_i/dt = -\partial H/\partial q_i \qquad (i = 1, \ldots, N),$
where $H[q_i(t), p_i(t)]$ is the Hamiltonian, gives the dynamical evolution of the system. It is said to be Liouville integrable if there exist $N$ constants of the motion relating the $2N$ dependent variables, $I_i(q, p) = C_i$, such that $\{I_i, I_j\} = 0$, where $\{\,,\,\}$ is the Poisson bracket defined by $\{A, B\} = \partial A/\partial q_i\, \partial B/\partial p_i - \partial A/\partial p_i\, \partial B/\partial q_i$, using the Einstein summation convention, i.e., that repeated indices are summed over.
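As a concrete illustration of this definition (a sketch of mine, not taken from the paper), the 2D isotropic oscillator has the Hamiltonian and the angular momentum as two constants of the motion in involution, which can be checked directly:

```python
# Poisson-bracket check of Liouville integrability for the 2-D isotropic oscillator
# (illustrative sketch, not from the paper).
import sympy as sp

q1, q2, p1, p2 = sp.symbols('q1 q2 p1 p2')
qs, ps = [q1, q2], [p1, p2]

def poisson(A, B):
    # {A, B} = dA/dq_i dB/dp_i - dA/dp_i dB/dq_i, summed over i.
    return sum(sp.diff(A, qs[i])*sp.diff(B, ps[i])
               - sp.diff(A, ps[i])*sp.diff(B, qs[i]) for i in range(2))

H = (p1**2 + p2**2)/2 + (q1**2 + q2**2)/2   # Hamiltonian
L = q1*p2 - q2*p1                            # angular momentum
print(sp.simplify(poisson(H, H)), sp.simplify(poisson(H, L)))   # 0 0 -> in involution
```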
If there exists a generating functional, $S(q, p)$, called the action, which is the time integral of the Lagrangian over a given time interval, Liouville’s theorem guarantees the integrability of the system. (For completeness, I should mention that the Lagrangian is a quantity that is to be minimized over a time interval by selecting $q(t), \dot{q}(t)$ for this purpose. The optimality conditions give the Euler–Lagrange equations, which show that the Hamiltonian is a conserved quantity.) In that case, the system is said to be algebraically integrable, and the $N$ solutions correspond to $N$-dimensional real tori. Even if the system is not algebraically integrable, one may still be able to find solutions by using the complex domain. With real time the solutions involve polynomial functions, but with complex time the functions can be rational, and one can use Painlevé analysis. However, the solution space is no longer the earlier tori: they are no longer real and, indeed, no longer tori, as the space becomes non-compact. The question of complex time brings one to the use of complex Hamiltonians to solve problems of atomic physics [12], as the Hamiltonian corresponds to time translations. The original use had been in the context of symmetry analysis, and this is directly related to complex methods in symmetry analysis.

3. Review of Symmetry Analysis and Complex Methods

An object is said to be symmetric with respect to some operation if it remains invariant under the operation. For algebraic expressions of many variables, it means invariance under interchange of those variables. For geometrical objects, the operations can be translation, reflection, rotation or re-scaling. For DEs, the transformations must be not only continuous but also adequately differentiable. If the independent variable is complex, differentiability in a region guarantees complex analyticity. As such, Lie [1] assumed the variables to be complex but did not make explicit use of that analyticity. To start with, consider only scalar $n^{th}$-order ODEs, $E(x, y; y', \ldots, y^{(n)}) = 0$. Regarding the independent and dependent variables as giving a point in a 2D space, Lie point transformations correspond to infinitesimal changes in the positions of the points of the space. Thus, the operator can be represented as a vector field in the tangent space at that point, $X = \xi(x, y)\,\partial/\partial x + \eta(x, y)\,\partial/\partial y$. To be able to apply it to the DE, the space needs to be enlarged or prolonged to an $(n+2)$-D so-called “jet space”. The corresponding prolonged symmetry generator is
$X^{[n]} = \xi(x, y)\,\frac{\partial}{\partial x} + \eta(x, y)\,\frac{\partial}{\partial y} + \eta^{1}(x, y; y')\,\frac{\partial}{\partial y'} + \cdots + \eta^{n}(x, y; y', \ldots, y^{(n)})\,\frac{\partial}{\partial y^{(n)}}.$
To obtain the coefficients of the generator, one has to write the transformed coordinates as a series expansion of a small parameter. The coefficient of the linear term for $x$ is the required $\xi$, and that of $y$ is the relevant $\eta$. For the transformed variables, $y' = dy/dx$ gives the tangency condition that $\eta^{1} = d\eta/dx - y'\, d\xi/dx$, where $d/dx = \partial_x + y'\partial_y$ is the total derivative, so that, for example, $d\xi/dx = \xi_{,x} + y'\xi_{,y}$. The values of the other $\eta$’s are obtained correspondingly. The ODE, $E$, is said to admit the symmetry generator, $X$, if
$X^{[n]} E \big|_{E=0} = 0,$
by which is meant that the generator acting on the algebraic function appearing on the left side of the equation annihilates it for solutions of the equation.
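To see the prolongation formulae in action, the short sketch below (mine, not from the paper) verifies on the jet space that $X = x^2\,\partial/\partial x + xy\,\partial/\partial y$, one of the eight point symmetries of $y'' = 0$, satisfies the symmetry condition above.

```python
# Second prolongation and the symmetry condition X^[n] E|_{E=0} = 0 for E: y'' = 0
# (illustrative sketch, not from the paper).
import sympy as sp

x, y, y1, y2, y3 = sp.symbols('x y y1 y2 y3')   # jet coordinates: x, y, y', y'', y'''

def Dx(expr):
    # Total derivative on the jet space.
    return (sp.diff(expr, x) + y1*sp.diff(expr, y)
            + y2*sp.diff(expr, y1) + y3*sp.diff(expr, y2))

def prolong2(xi, eta):
    # Prolonged coefficients eta^1, eta^2 of X = xi d/dx + eta d/dy.
    eta1 = Dx(eta) - y1*Dx(xi)
    eta2 = Dx(eta1) - y2*Dx(xi)
    return eta1, eta2

xi, eta = x**2, x*y                     # the generator X = x**2 d/dx + x*y d/dy
eta1, eta2 = prolong2(xi, eta)
E = y2                                  # the ODE y'' = 0
XE = xi*sp.diff(E, x) + eta*sp.diff(E, y) + eta1*sp.diff(E, y1) + eta2*sp.diff(E, y2)
print(sp.simplify(XE.subs(y2, 0)))      # 0 -> the symmetry condition holds on E = 0
```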
There are various methods available for reducing the number of variables of the DE or reducing its order by using a symmetry. Perhaps the simplest is the construction of differential invariants. These are expressions involving the variables in the jet space, barring the highest-order terms, that remain constant under the symmetry. These can be used to write one of the highest derivatives in terms of the other variables, thereby reducing the number of variables or the order of the DE. Criteria for complete solvability are given in terms of what is called a “group classification”. Since the symmetry is only identified locally, the classification is actually of the Lie algebra of the symmetry generators of the DE. Lie showed that the generators form a basis of vector fields for the tangent space of the solutions of the DE and, hence, satisfy a set of commutator relations, $[X_i^{[n]}, X_j^{[n]}] = C_{ij}^{k} X_k^{[n]}$, where the $C_{ij}^{k}$ are called structure constants. Clearly, the algebra is characterised by the complete set of structure constants.
To bring out the difference between the local symmetry of the algebra and the global symmetry of the group, consider the symmetry generators of the Euclidean plane, $so(2) \oplus_s \mathbb{R}^2$, where $so(2)$ is the generator of the rotation of the plane, each $\mathbb{R}$ is a translation and the subscript $s$ denotes the semidirect product, i.e., that the product is non-commuting. Now imagine the plane wrapped into a cylinder. The symmetry of the cylinder is $so(2) \oplus \mathbb{R}$, where one of the translations has become a rotation about the axis of the cylinder and the original rotation is lost, as it yields a tilting of the cylindrical axis. Imagine a little square pinned to the cylinder at some point. Extended, this would be a tangent plane to the cylinder at that point and would possess the symmetries of the plane, but the cylinder would not possess all its symmetries. The Lie group for the cylinder is, thus, $SO(2) \otimes \mathbb{R}$, but that of the plane is $SO(2) \otimes_s \mathbb{R}^2$. The DE generators lie on the little square, which possesses all its symmetries, so the symmetries of the algebra are $so(2) \oplus_s \mathbb{R}^2$. If one wants to extend the solution well beyond the original point and, in fact, allow it to close up if it is compact, one needs other methods. In the complex domain, it leads to a change of the topology, which has to be dealt with. Nevertheless, one has compact and non-compact complex Lie groups. This is of relevance for the connection between complex methods for symmetry analysis and singularity analysis.
Of particular relevance for our purposes is the method of transforming the independent and dependent variables to transform DEs to a linear form, called linearization, thereby yielding their exact solutions. While all scalar first-order ODEs can be so transformed by Lie point transformations, this does not hold even for second-order ODEs. Lie proved that scalar second-order ODEs are linearizable (see, for example, [13]) only if they have eight symmetry generators. For the linearizability of scalar $n^{th}$-order ODEs $(n \geq 3)$, there are three classes with $(n+1)$, $(n+2)$ and $(n+4)$ generators, respectively [14]. For $m$-D systems of second-order ODEs, there are $2m$ classes, with $(2m+1), \ldots, 4m$ generators, and then one class with $(m+2)^2 - 1$ generators [15,16]. For systems of higher-order ODEs, the formula is obtained by putting the two together. For PDEs, the situation is more complicated and not relevant for the present purposes.
It was shown that there is a direct connection between geometric symmetries and systems of second-order ODEs [17,18,19], as the ODEs satisfied by the shortest paths between two points, i.e., geodesics, are second-order nonlinear systems. Specifically, the directions along which the metric tensor of the underlying space, $g_{ab}(x^c)$, is invariant, called isometries, are the symmetries of the following geodesic equations:
$\ddot{x}^a + \Gamma^a_{bc}\,\dot{x}^b \dot{x}^c = 0,$
where $\dot{x}^a = dx^a/ds$, $s$ being the arc length parameter, and
$\Gamma^a_{bc} = \tfrac{1}{2}\, g^{ad}\left(g_{bd,c} + g_{cd,b} - g_{bc,d}\right)$
is called the Christoffel symbol, where $g^{ad}$ is the inverse metric tensor and the subscript comma denotes partial differentiation relative to the position vector.
Upon using the translational invariance of the geodesic equations with respect to $s$, one can project the $m$-D system down to one of $(m-1)$-D. While the original ODEs are quadratically semi-linear, the projected system is cubically semi-linear. Lie found that all his linearizable systems had to be cubically semi-linear and satisfy four first-order differential constraints involving two arbitrary functions. Tressé [20] eliminated the arbitrary functions and reduced the number of constraints to two by increasing their order to two. It is natural to expect that, since a flat space has straight-line geodesics, the condition for linearizability would be that the curvature tensor for the manifold containing the geodesics be zero. That this is a sufficient condition for linearizability was proven [21]. Since for a scalar second-order ODE the linearizable class is unique, in that case it is also a necessary condition, but there is no reason for it to be unique for higher orders or higher-dimensional systems. This issue was dealt with more generally in [22], and it was found that the Lie conditions came out automatically in projecting from 2D down to the scalar case due to the coordinate freedom resulting in a free choice of two Christoffel symbols. The $n$-D generalization of the condition comes out automatically.
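The flatness criterion can be made tangible with a small sketch (mine, not the code of Refs. [23,24]): for the flat 2D metric written in polar coordinates, the Christoffel symbols computed from the formula above are nonzero, so the geodesic equations look nonlinear, yet the curvature tensor vanishes identically, which is what signals linearizability.

```python
# Christoffel symbols and curvature of the flat 2-D metric in polar coordinates
# (illustrative sketch, not from the paper's codes).
import sympy as sp

r, th = sp.symbols('r theta', positive=True)
X = [r, th]
g = sp.Matrix([[1, 0], [0, r**2]])          # ds^2 = dr^2 + r^2 dtheta^2
ginv = g.inv()

def christoffel(a, b, c):
    # Gamma^a_{bc} = (1/2) g^{ad} (g_{bd,c} + g_{cd,b} - g_{bc,d})
    return sp.Rational(1, 2)*sum(ginv[a, d]*(sp.diff(g[b, d], X[c])
           + sp.diff(g[c, d], X[b]) - sp.diff(g[b, c], X[d])) for d in range(2))

Gamma = [[[sp.simplify(christoffel(a, b, c)) for c in range(2)]
          for b in range(2)] for a in range(2)]
print(Gamma)   # nonzero entries: Gamma^r_{theta theta} = -r, Gamma^theta_{r theta} = 1/r

def riemann(a, b, c, d):
    # R^a_{bcd} = Gamma^a_{bd,c} - Gamma^a_{bc,d} + Gamma^a_{ce} Gamma^e_{bd} - Gamma^a_{de} Gamma^e_{bc}
    return sp.simplify(sp.diff(Gamma[a][b][d], X[c]) - sp.diff(Gamma[a][b][c], X[d])
           + sum(Gamma[a][c][e]*Gamma[e][b][d] - Gamma[a][d][e]*Gamma[e][b][c]
                 for e in range(2)))

print(all(riemann(a, b, c, d) == 0 for a in range(2) for b in range(2)
          for c in range(2) for d in range(2)))   # True: the space is flat
```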
A code was constructed to convert the coefficients of the cubically semi-linear ODE to Christoffel symbols and, thereby, generate the metric tensor corresponding to the ODE [23]. An algorithm developed to write the metric of a flat space in Cartesian coordinates [24] can, thus, be used to directly obtain the linearizing transformation and, hence, to linearize the system. This was called “geometric linearization” [25]. It only yields linearizable systems with maximum symmetry ($sl(n+2, \mathbb{R})$). For $n = 1$, this would be the only linearizable system, but even for $n = 2$, there are five classes with 5, 6, 7, 8 or 15 infinitesimal symmetry generators, of which only the last is obtained geometrically; hence, only for that class can the solution be obtained by using the codes. It would be most desirable to be able to use the power of the geometric method for the other classes and, perhaps, learn why they arose (much as one learned where the Lie conditions came from).
It is noted that explicit use of the analyticity of the dependent variable assumed by Lie would entail an apparent paradox. Splitting the variables into their real and imaginary parts would double the number of variables and generators. Also, a complex two-dimensional space corresponds to a real four-dimensional space. The maximum number of symmetry generators for the real four-dimensional space is 15, while the maximum number of complex generators is 8, the splitting of which should yield 16 and not 15 generators. Omitting any one of the 8, the split system has only 14 generators instead of 15. The resolution of the paradox is that the system has to incorporate the system of Cauchy–Riemann equations and so the set of symmetries is modified [5,6]. This explicit use of analyticity for the complex domain is called “complex symmetry analysis” (CSA).
A complex ODE, called a CODE, can be split into its real and imaginary parts by writing the independent and dependent variables in terms of their real and imaginary parts. Thus, for example, a scalar CODE splits into a pair of PDEs in two independent variables. More generally, an $n$-D system of CODEs yields a $2n$-D system of PDEs in two independent variables. To obtain ODEs from the CODE, one can instead require that the independent variable be restricted to being real, while the dependent variable remains complex. In this case, one obtains a system of $2n$ real ODEs, called RODEs. While for every CODE there is a system of RODEs, the converse is not true. However, criteria have been formulated to determine when a system of RODEs corresponds to a CODE.
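A quick symbolic sketch (mine, not from the paper) shows the splitting for a scalar example: writing $u = f + \iota g$ with a real independent variable $x$, the CODE $u'' + u^2 = 0$ splits into the pair of RODEs $f'' + f^2 - g^2 = 0$ and $g'' + 2fg = 0$.

```python
# Splitting a scalar CODE into its corresponding system of RODEs
# (illustrative sketch, not from the paper).
import sympy as sp

x = sp.symbols('x', real=True)
f, g = sp.Function('f')(x), sp.Function('g')(x)   # real and imaginary parts of u
u = f + sp.I*g                                    # complex dependent, real independent variable

code = sp.expand(sp.diff(u, x, 2) + u**2)         # the CODE u'' + u**2 = 0
real_eq = code.coeff(sp.I, 0)                     # f'' + f**2 - g**2 = 0
imag_eq = code.coeff(sp.I, 1)                     # g'' + 2*f*g = 0
print(real_eq, imag_eq, sep='\n')
```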
If the CODE is linearizable, it is not necessary that the corresponding system of RODEs be linearizable. Of course, we can obtain the complete solution of an $m^{th}$-order linearizable $n$-D CODE involving $nm$ arbitrary constants, but the corresponding system of RODEs may not be linearizable; hence, its solution may not involve the corresponding $2nm$ constants. The use of the correspondence between linearizable CODEs and systems of RODEs to solve the problems for the systems of RODEs is called complex linearization. It yielded two of the four missing linearizable classes mentioned above [7,26,27], but the other two did not turn up there. Furthermore, there were nonlinearizable RODEs corresponding to linearizable CODEs. Nevertheless, the procedure did provide solutions to RODEs with insufficient symmetries for the purpose, and even to those that had no symmetries at all. The following question remained: “Why, and when, does complex linearization provide the results mentioned here?”.

4. The Distributional Representation of Special Functions

Another development in CSA needed a formal foundation to make it rigorous but, at the time, was pursued without a proper base. That base came subsequently from some work in a totally different field: that of the so-called “special functions”. These functions are normally thought of in the context of solutions of Sturm–Liouville systems, which are second-order linear ODEs with different types of boundary or initial conditions. To that extent, they seem to fit in the broad area of DEs. However, there is the gamma function, which is not of this type, and, even more, there is the Riemann zeta function, which arises in the theory of prime numbers—a far cry from DEs. The formal development arose in working with delta functions with a complex argument. To explain the context, a brief explanation of different representations of special functions is provided first; then, the discussion carries on to the problem of dealing with delta functions of a complex variable.
What exactly is a function? It can be represented in different ways: by an algorithm, algebraically, in tabular form, graphically, etc. For example, the factorial function $n! = n(n-1)\cdots 2 \cdot 1$ is well defined for natural numbers ($\mathbb{N}$). Although $0!$ makes no sense by itself, for consistency of the combinatorial notation $^nC_r = n!/[r!\,(n-r)!]$, it is defined as 1, thus adjoining 0 to the domain. In this form, there is no graphical representation. Defining it as an integral, one can extend the domain from $\mathbb{N} \cup \{0\}$ to $\mathbb{R} \setminus \{-\mathbb{N}\}$. This is the integral representation of the “generalized factorial” called the gamma function, $\Gamma(x+1)$, which can now also be represented graphically, with infinite discontinuities at the negative integers. Extending the domain to $\mathbb{C} \setminus \{-\mathbb{N}\}$, the infinite discontinuities convert to simple poles, and the domain becomes connected. With this domain, it can also be regarded as an integral (Mellin) transform of $e^{-t}$, i.e.,
$\Gamma(s) = \mathcal{M}[e^{-t}; s] := \int_0^{\infty} e^{-t}\, t^{s-1}\, dt, \qquad (\Re(s) > 0),$
which can then be analytically continued to the entire complex plane (with poles at the negative integers). This is an integral transform representation. Throughout, it remains the same factorial function or its generalizations/extensions.
Of particular interest is the Fourier transform representation (FTR) because it is easy to obtain its inverse transform, unlike most other transforms, such as Laplace or Mellin transforms. Fourier and inverse Fourier transforms are defined (respectively) by
$F(k) = \mathcal{F}[f(x)] := \int_{-\infty}^{+\infty} e^{-2\pi \iota k x} f(x)\, dx,$
and
$f(x) = \mathcal{F}^{-1}[F(k)] := \int_{-\infty}^{+\infty} e^{2\pi \iota k x} F(k)\, dk.$
At the base, there is an image of a “true function” laid out in some Platonic heaven, whose shadows are seen by its representations. (Even the name “representation” evokes this image.) Think of it in geometrical terms as the “function” defined in some manifold in the sky, while we only see its coordinate form down on Earth. However, for the “distributional representation” to be discussed, this image no longer applies. The function has to descend from the sky and get its hands dirty by acting on other functions. It is no longer the usual function but what is called a generalized function or a functional. A functional is normally defined as a mapping from the space of functions to that of real numbers. More generally, it is a mapping from a space of functions to the space of functions. The difference this distinction makes lies in the cardinality. For transfinite numbers, the countable set ($\mathbb{N}$) has a cardinality of $\aleph_0$, and, assuming the continuum hypothesis, $\mathbb{R}$ is the power set of the natural numbers, with a cardinality of $\aleph_1 = 2^{\aleph_0}$. The space of all functions ($\mathbb{F}$) can then be regarded as the power set of $\mathbb{R}$ (see, for example, [28]), with a cardinality of $\aleph_2 = 2^{\aleph_1}$, and the space of generalized functions ($\mathbb{G}$) has a cardinality of $\aleph_3 = 2^{\aleph_2}$.
The generalized function or distribution is defined by its action on “test functions” that belong to a class $C \subset \mathbb{F}$ of “well-behaved” functions over a compact support ($K$) (see, for example, [29]). More specifically, they are defined by the inner product of the distribution with a test function over the compact support. Of course, all members of $C$ would, themselves, be distributions. However, the space of distributions also includes, for example, the Heaviside step function, $\Theta(x - x_0)$, and the Dirac delta function, $\delta(x - x_0)$, which are not functions in the usual sense. Thus, for any $\phi(x)$, $\Theta(x - x_0)$ is defined by
$\langle \Theta(x - x_0), \phi(x) \rangle := \phi(x)\big|_{x > x_0}, \quad x \in K,$
where the “$>$” is used in the sense of Pareto, i.e., each component of one vector is greater than those of the other. Similarly, $\delta(x - x_0)$ is defined by
$\langle \delta(x - x_0), \phi(x) \rangle := \phi(x_0),$
provided $x_0 \in K$, and 0 otherwise.
The distributional representation [30] of a function expresses it as a series of distributions, i.e., as a linear combination of a countable sequence of distributions over the field $\mathbb{C}$. It originally arose in the use of the FTR for $\Gamma(x)$. Since the FTR must necessarily be complex, one uses $\Gamma(z)$ instead, which has simple pole singularities on the negative real axis, so the Cauchy integral formula yields a series of delta functions. In this guise, it can be seen as an operator that acts on any test function by the inner product defined by an integral over the imaginary part of $z$. By taking the test function to be $\overline{\Gamma(z)}$, new identities are obtained, yielding the “norm square” ($\|\Gamma(z)\|^2 = \pi\, 2^{1-2x}\, \Gamma(2x)$) and the “norm fourth” ($\|\Gamma(z)\|^4 = 2\pi\, \Gamma^4(2x)/\Gamma(4x)$). There are numerous other very useful formulae obtained with this representation. (When presented at a conference, the identities created quite a stir [31].) It has also led to new identities for the Riemann zeta function (RZF) and its family [32,33], including ones for a series of Dirichlet $\eta$ and $\Lambda$ functions and even of Bernoulli numbers!
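These gamma-function identities are easy to test numerically; the short sketch below (mine, not from the paper) compares the “norm square”, computed as an integral over the imaginary part of $z = x + \iota y$, with the closed form quoted above.

```python
# Numerical check of the "norm square" identity for the gamma function
# (illustrative sketch, not from the paper).
import mpmath as mp

def gamma_norm_sq(x):
    # Integral of |Gamma(x + i*y)|**2 over the imaginary part y, at fixed real part x.
    return mp.quad(lambda y: abs(mp.gamma(x + 1j*y))**2, [-mp.inf, mp.inf])

for x in (0.5, 1.0, 1.7):
    closed_form = mp.pi * mp.mpf(2)**(1 - 2*x) * mp.gamma(2*x)
    print(x, gamma_norm_sq(x), closed_form)   # the last two columns agree
```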

5. The Complex Variational Principle

The variational problem is to find the choice of objective functions (of the dependent variables and their derivatives) that minimize their integral over a given interval (or domain) of the independent variable(s), subject to some constraints. The functional that is minimized is called the action. That nature (or people) follows the resulting “path” in physical (or economic) applications is called the “principle of least action”. Using the method of Lagrange multipliers with the constraint functions, the resulting objective function is called the Lagrangian. The necessary condition for the extremal is that the variation of the functional (the action integral) be zero. This is the variational principle. Assuming that the Lagrangian depends only on the first derivative, there is a dual formulation that replaces the first derivative of the dependent variable(s) by a (or several) conjugate variable(s) and leads to the conservation of the dual to the Lagrangian, called the Hamiltonian function. In either way of looking at the problem, one needs an extremal value. Since $\mathbb{C}$ is not an ordered set, this entails the requirement that the functional and, hence, the Lagrangian and Hamiltonian lie in $\mathbb{R}$. Of course, one could take the absolute value of the “complex functional”, but that does not solve the original problem.
Originally, the results of variational calculus were extended to the complex domain [6,34] without bothering with the corresponding rigorous extension of functionals. A crucial point is that for the extension for ODEs, one restricts the independent variable to $\mathbb{R}$. One needs to interpret the two components for physical and economic applications. This was done in a physical application [35]. It turns out that there has been work done on bi-Hamiltonian systems (see, for example, [36]) and even on “non-observable (i.e., non-Hermitian) Hamiltonians” [12], which are needed to explain some atomic spectra. For the economic example, there may be two independent objective functions to be minimized, subject to two sets of constraints, in which neither is given greater importance over the other. In this sense, each of the Lagrangians acts like a constraint for the other. It would be interesting to find some actual economic applications of the “bi-Lagrangian”. There remained the problem of a rigorous extension of the functional from $\mathbb{R}$ to $\mathbb{C}$.
The rigorous formulation came from work on special functions arising from the distributional representation [37,38]. Notice that the space of distributions is not an inner product space, as it is defined by its action on “well-behaved functions” and not on other distributions. For example, $\delta^2(x)$ is not a distribution and has no clear definition as one. Thus, the usual norm is also not defined for distributions. Essentially, this is the problem with defining a complex Lagrangian. For this purpose, it is necessary to use a generalization of the distribution, called an ultradistribution [39]. Ultradistributions are objects that act on distributions to produce distributions. The basis for this comes from the process of iterative integration, which may be regarded as a “negative-order” differentiation [40]. Even $\delta'(x)$ is not directly defined as a distribution, but it can be evaluated by integration by parts when multiplied by a test function. In effect, it acts as a derivative of the test function evaluated at zero (with the sign flipped). (It is worth pointing out that the space of higher-order ultradistributions has a correspondingly higher-order transfinite cardinality.) The net result is that we can rigorously deal with the bi-Lagrangian or bi-Hamiltonian as a complex Lagrangian or an “unobservable” Hamiltonian.

6. The Complex Connection

Although complex numbers were introduced by Cardano (1501–1576), they were introduced as variables for functions by Euler (1707–1783) (see [41]). This not only extended the domain of the functions considered, but it changed the very conception of functions. In particular, it led to the study of the nature of singularities of functions and their use for evaluating contour integrals [42]. Thus, when Lie was developing his methods for solving DEs [1], he automatically took the variables and the functions to lie in the complex domain without adverting to its significance or concerning himself with the singularities of the functions. Although Frobenius used the method of expanding about regular singular points [4], he ignored the possibility of using complex variables and exploiting them. It was only when Painlevé used complex variables and exploited the singular behaviour of the functions satisfying the DEs [2] that the full power of complex analysis could be used to solve them. What Euler did for the theory of functions and Painlevé did for Frobenius, the “complex methods” [5] try to do for Lie. As such, there should be much wisdom to be found by looking at those methods from the perspective of the earlier two developments.
The most obvious connection is the fact that the complex methods led to the solution of systems of RODEs by their correspondence to split CODEs, even though the system of RODEs was not linearizable. The question asked earlier was how that could be. Although the question was pondered, at the time no answer was forthcoming. However, the glimmering of an answer lay in the requirement that when the CODE is split, to obtain a real system, the independent variable has to remain real, as otherwise the resultant system would be of PDEs. As such, it is necessary that the functions in the CODE be real analytic, i.e., analytic on the real axis, so that any singularities must lie in the complex plane off the real axis. If there are no singularities, as seen by recalling singularity analysis, there is no extra solution provided. This leads to the following theorem.
Theorem 1.
To give linearizable RODEs, the CODE must be real analytic, i.e., have enough singularities only in the real part of the domain where complex methods are applied and none outside it; and to provide solutions of the RODEs not available by classical methods, it must have movable singularities in that domain.
Corollary 1.
If the CODE has enough singularities in the real domain for linearization and has some movable singularities, some of the other linearizable classes corresponding to the other movable singularities are obtained.
Corollary 2.
For the solution of the RODEs to be global, the corresponding domains must be the entire $\mathbb{R}^n$ and $\mathbb{C}^n$.
One now needs to see to what extent singularity analysis can help with understanding the “distributional representation” of special functions (as defined by Chaudhry and Qadir). Recalling that the space of distributions is not an inner product space, any inner product or norm can only be defined in some sense. Such an “inner product” was defined for the distributional representation involving a series of delta functions to be summed up over the imaginary part of the independent variable. Obviously, this requires that there be movable singularities in the differential equation defining the special function, if any exist.
For this purpose, I give an example of the use of the DR to obtain new identities for the RZF. It is defined by its integral representation [43]:
$\zeta(s) = \frac{1}{\Gamma(s)} \int_0^{\infty} t^{s-1} \left(1 - e^{-t}\right)^{-1} e^{-t}\, dt, \qquad (\Re(s) > 1),$
which is then analytically continued to cover most of the complex plane. Using its Fourier transform representation leads to the following distributional representation [32,33]:
$\zeta(\sigma + \iota\tau) = \frac{2\pi}{\Gamma(\sigma + \iota\tau)} \sum_{l=0}^{\infty} \sum_{n=0}^{\infty} \sum_{m=0}^{n} \frac{(-1)^{l+m}}{l!\, m!} \binom{n}{m}\, \delta\!\left(\iota\tau + (\sigma + l + m)\right).$
As explained earlier, this representation is only meaningful as an operator acting on a test function by integration over τ . In particular, it can act on the gamma function to yield the following identity:
$\langle \Gamma(s), \zeta(s) \rangle = 2\pi \sum_{m=0}^{\infty} \frac{(-1)^m}{m!}\, \zeta(-m).$
Taking the DR acting on the zeta function itself gives a norm in some sense.
$\|\zeta(\sigma + \iota\tau)\|_{\tau}^{2} = 2\pi \sum_{m=0}^{\infty} \sum_{n=0}^{\infty} \sum_{k=0}^{n} \frac{(-1)^{m+k}}{m!\, k!} \binom{n}{k}\, \zeta(2\sigma + k + m).$
This “norm” is defined by integration over the imaginary part of the independent variable ( τ ), defined with the gamma function as a weight, leaving it as a function of the real part ( σ ), so it is not quite a norm in the usual sense. Call it a “ τ -norm”.
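Before turning these identities into an SL system, it is reassuring to check the starting integral representation numerically; the sketch below (mine, not from the paper) evaluates the integral, with the $1/\Gamma(s)$ prefactor written explicitly, against a library value of the RZF at a point with $\Re(s) > 1$.

```python
# Numerical check of the integral representation of the Riemann zeta function
# (illustrative sketch, not from the paper).
import mpmath as mp

def zeta_from_integral(s):
    # zeta(s) = (1/Gamma(s)) * integral_0^inf t**(s-1) exp(-t) / (1 - exp(-t)) dt, Re(s) > 1
    integrand = lambda t: t**(s - 1)*mp.exp(-t)/(1 - mp.exp(-t))
    return mp.quad(integrand, [0, mp.inf])/mp.gamma(s)

s = mp.mpc(2.5, 1.0)
print(zeta_from_integral(s), mp.zeta(s))   # the two values agree
```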
A norm is generally defined for the special functions that are solutions of a Sturm–Liouville (SL) system with respect to a weight function. Treating the real part of the independent variable as a parameter, in this case, the SL system is
$\left[\Gamma(\sigma + \iota\tau)\, \zeta_{,\tau}(\sigma + \iota\tau)\right]_{,\tau} + \left[q(\sigma + \iota\tau) + \lambda\, p(\sigma + \iota\tau)\right] \zeta(\sigma + \iota\tau) = 0,$
where “$_{,\tau}$” means “the derivative with respect to $\tau$”, and $\lambda \in \mathbb{C}$ is the eigenvalue parameter that labels the solutions. Since $p$, $q$ and $\lambda$ can be chosen arbitrarily (not necessarily uniquely) by using the properties of the RZF, this gives a second-order ODE for the RZF in terms of the imaginary part of its independent variable. This unifies the RZF with other special functions of mathematical physics!
Here, then, is an example of how the complex connection can lead to new insights and results for special functions:
Theorem 2.
If there is a DR for any special function, then there is a corresponding second-order ODE in terms of the imaginary part of the independent variable, with respect to the weight function appearing in the representation.
Corollary 3.
A $\tau$-norm can be defined for any function with a DR.
Corollary 4.
A second-order ODE for the gamma function in the τ sense can be obtained.

7. Conclusions and Discussion

We have seen that CSA and special functions have benefited from the insights obtained from singularity analysis. One could look for the reverse payback of those two to singularity analysis. There are 50 Painlevé types of second-order ODEs, of which 44 were solved using Lie symmetries, by linearization or quadrature, while the remaining 6 needed new transcendental functions to solve them [9]. To that extent, the payback is already there. It is possible that CSA methods using contact or higher-order symmetries could shed further light on those six types. Furthermore, CSA has been used for higher-order ODEs [44,45], so the Painlevé-type higher-order ODEs could be investigated using CSA. There is little more to be said here about the use of singularity analysis for CSA, as the new results of CSA come directly from it, and examples of its use are given in the literature cited above.
Coming to the use of DRs of special functions for singularity analysis, if they lead to DEs, those DEs will necessarily have movable singularities and, hence, lie among the Painlevé types. Identifying those Painlevé equations, the well-known properties of the special functions can then be used for those Painlevé types. Furthermore, in the case of the RZF, it was pointed out that the equation would be obtainable for it as a function of the real part of its independent variable. However, since it would come from a movable singularity for the presumed ODE, there must be a sort of “dual” equation for the imaginary part. Furthermore, it should be possible for the two to be put together to give a second-order CODE for the special function of the full complex variable. This would be a line worth investigating—not only for the RZF but also for other functions that have not already been obtained by solving ODEs.

Funding

The research received no funding.

Data Availability Statement

No new data were created or analyzed in this study.

Acknowledgments

I am very grateful to Sergey Meleshko and the organizers of Symmetry 2022 for inviting me to the conference, where I learned of singularity analysis and realized its consequences for symmetry analysis and special functions. I am also very grateful to Fazal Mahomed and Andronikos Paliathanasis for extremely useful comments and suggestions and for a very thorough reading of the draft of the paper through its various stages of development.

Conflicts of Interest

The author declares no conflicts of interest.

References

  1. Lie, S. Klassification und Integration von gewöhnlichen Differentialgleichungen zwischen x, y, die eine Gruppe von Transformationen gestatten. Arch. Math. 1883. In Lectures on Differential Equations with Known Infinitesimal Transformations, Lie’s Lectures by G. Scheffers; Teubner: Leipzig, Germany, 1891. (In German) [Google Scholar]
  2. Painlevé, P. Mémoire sur les équations différentielles dont l’intégrale. Bull. Soc. Math. Fr. 1900, 28, 201–261. [Google Scholar] [CrossRef]
  3. Painlevé, P. Sur les équations différentielles du second ordre et d’ordre supérieur dont l’intégrale générale est uniforme. Acta Math. 1902, 25, 1–85. [Google Scholar] [CrossRef]
  4. Frobenius, G. Ueber die Integration der linearen Differentialgleichungen durch Reihen. J. Für Reine Angew. Math. 1873, 76, 214–235. [Google Scholar]
  5. Ali, S.; Mahomed, F.M.; Qadir, A. Complex Lie symmetries for scalar second-order ordinary differential equations. Nonlinear Anal. Real World Appl. 2009, 10, 3335–3344. [Google Scholar] [CrossRef]
  6. Ali, S. Complex Symmetry Analysis. Ph.D. Thesis, National University of Sciences and Technology (NUST), Islamabad, Pakistan, 2009. [Google Scholar]
  7. Safdar, M.; Qadir, A.; Ali, S. Linearizability of systems of ordinary differential equations obtained by complex symmetry analysis. Math. Probl. Eng. 2011, 2011, 171834. [Google Scholar] [CrossRef]
  8. Ali, S.; Safdar, M.; Qadir, A. Linearization from complex Lie point transformations. J. Appl. Math. 2014, 2014, 793247. [Google Scholar] [CrossRef]
  9. Ramani, A.; Grammaticos, B.; Bountis, T. The Painlevé Property and Singularity Analysis of Integrable and Non-Integrable Systems. Phys. Rep. 1989, 180, 159–245. [Google Scholar] [CrossRef]
  10. Paliathanasis, A.; Taves, T.; Leach, P.G.L. Integrability of the Einstein-nonlinear SU(2) σ-model in a nontrivial topological sector. Eur. Phys. J. C 2017, 77, 909. [Google Scholar] [CrossRef]
  11. Ince, E.L. Ordinary Differential Equations; Dover Publications: Mineola, NY, USA, 1956. [Google Scholar]
  12. Bender, C.M.; Boettcher, S. Real spectra in non-Hermitian Hamiltonians having PT symmetry. Phys. Rev. Lett. 1998, 80, 5243. [Google Scholar] [CrossRef]
  13. Stephani, H. Differential Equations: Their Solutions Using Symmetries; Cambridge University Press: Cambridge, UK, 1996. [Google Scholar]
  14. Mahomed, F.M.; Leach, P.G.L. Symmetry Lie algebra of nth order ordinary differential equations. J. Math. Anal. Appl. 1990, 151, 80. [Google Scholar] [CrossRef]
  15. Gorringe, V.M.; Leach, P.G.L. Lie point symmetries for systems of 2nd order linear ordinary differential equations. Quaest. Math. 1988, 11, 95. [Google Scholar] [CrossRef]
  16. Soh, C.W.; Mahomed, F.M. Symmetry Breaking for a System of Two Linear Second-Order Ordinary Differential Equation. Nonlinear Dyn. 2000, 22, 121. [Google Scholar]
  17. Aminova, A.V.; Aminov, N.A.-M. Projective geometry of systems of differential equations: General conceptions. Tensor New Ser. 2000, 62, 65. [Google Scholar]
  18. Aminova, A.V.; Aminov, N.A. Projective geometry of systems of second-order differential equations. Sbornik Math. 2006, 197, 951. [Google Scholar] [CrossRef]
  19. Feroze, T.; Mahomed, F.M.; Qadir, A. The connection between isometries and symmetries of geodesic equations of the underlying spaces. Nonlinear Dyn. 2006, 45, 65. [Google Scholar] [CrossRef]
  20. Tressé, A. Sur les invariants differentiels des groupes continus de transformations. Acta Math. 1894, 18, 1. [Google Scholar] [CrossRef]
  21. Mahomed, F.M.; Qadir, A. Linearization criteria for a system of second order quadratically semi-linear ordinary differential equations. Nonlinear Dyn. 2007, 48, 417. [Google Scholar] [CrossRef]
  22. Mahomed, F.M.; Qadir, A. Invariant linearization criteria for systems of cubically semi-linear second order ordinary differential equations. J. Nonlinear Math. Phys. 2009, 16, 283–298. [Google Scholar] [CrossRef]
  23. Fredericks, E.; Mahomed, F.M.; Momoniat, E.; Qadir, A. Constructing a space from the system of geodesic equations. Comput. Phys. Commun. 2008, 179, 438. [Google Scholar] [CrossRef]
  24. Bokhari, A.H.; Qadir, A. A prescription for n-dimensional vierbeins. ZAMP 1985, 36, 184. [Google Scholar]
  25. Qadir, A. Geometric linearization of ordinary differential equations. Symmetry Integr. Geom. Methods Appl. (SIGMA) 2007, 3, 103–109. [Google Scholar] [CrossRef]
  26. Safdar, M.; Qadir, A.; Ali, S. Inequivalence of classes of linearizable systems of cubically semi-linear ordinary differential equations obtained by real and complex symmetry analysis. Math. Comput. Appl. 2011, 16, 923. [Google Scholar]
  27. Safdar, M. Solvability of Differential Equations by Complex Symmetry Analysis. Ph.D. Thesis, National University of Sciences and Technology (NUST), Islamabad, Pakistan, 2013. [Google Scholar]
  28. Muhammad, N.; Qadir, A.; Khan, I.P. Topology for Beginners; Oxford University Press: Oxford, UK, 2022. [Google Scholar]
  29. Gel’fand, I.M.; Shilov, G.E. Generalized Functions, Vol. 2: Spaces of Fundamental and Generalized Functions; Academic Press: Cambridge, MA, USA, 1964. [Google Scholar]
  30. Chaudhry, M.A.; Qadir, A. Fourier transform and distributional representations of the gamma function leading to some new identities. Int. J. Math. Math. Sci. 2004, 37, 2091–2096. [Google Scholar] [CrossRef]
  31. Qadir, A. The generalization of special functions. Appl. Math. Comput. 2007, 187, 395–402. [Google Scholar] [CrossRef]
  32. Jamshaid, A. Comparison of the Fourier Transform and the Distributional Representation in the Context of the Family of Zeta Functions. Master’s Thesis, Abdus Salam School of Mathematical Sciences, Government College University Lahore (GCUL), Lahore, Pakistan, 2021. [Google Scholar]
  33. Jamshaid, A.; Qadir, A. New identities for the family of zeta functions by using their distributional representations. Math. Comput. Simul. 2023. submitted. [Google Scholar]
  34. Ali, S.; Mahomed, F.M.; Qadir, A. Complex Lie symmetries for variational problems. J. Nonlinear Math. Phys. 2008, 25, 25. [Google Scholar] [CrossRef]
  35. Farooq, M.U.; Ali, S.; Qadir, A. Invariants of two-dimensional systems via complex Lagrangians with applications. Commun. Nonlinear Sci. Numer. Simul. 2010, 16, 1804. [Google Scholar] [CrossRef]
  36. Blaszak, M. The Theory of Hamiltonian and Bi-Hamiltonian Systems. In Multi-Hamiltonian Theory of Dynamical Systems; Springer: Berlin/Heidelberg, Germany, 1998; pp. 41–85. [Google Scholar]
  37. Tassaddiq, A.; Qadir, A. Fourier transform and distributional representation of the generalized gamma function with some applications. Appl. Math. Comput. 2011, 218, 1084–1088. [Google Scholar] [CrossRef]
  38. Tassaddiq, A. Some Representations of the Extended Fermi–Dirac and Bose–Einstein functions with Applications. Ph.D. Thesis, National University of Sciences and Technology (NUST), Islamabad, Pakistan, 2012. [Google Scholar]
  39. Zemanian, A.H. Distribution Theory and Transform Analysis; Dover Publications: Mineola, NY, USA, 1987. [Google Scholar]
  40. Gel’fand, I.M.; Shilov, G.E. Generalized Functions, Volume 2; AMS Chelsea Publishing: New York, NY, USA, 2015. [Google Scholar]
  41. Euler, L. Elements of Algebra; Hewlett, R.J., Translator; Springer-Verlag: Berlin/Heidelberg, Germany, 1972; Original Work Published in 1770. [Google Scholar]
  42. Cauchy, A.L. Mémoire sur les Intégrales Définies, Prises Entre des Limites Imaginaires [A Memorandum on Definite Integrals Taken between Imaginary Limits]; Submitted to the Académie des Sciences on 28 February; De Bure Frères: Paris, France, 1825. (In French) [Google Scholar]
  43. Edwards, H.M. Riemann’s Zeta Function; Academic Press: New York, NY, USA; London, UK, 1974. [Google Scholar]
  44. Dutt, H.M.; Safdar, M.; Qadir, A. Linearization criteria for two dimensional systems of third order ordinary differential equations by complex approach. Arab. J. Math. 2019, 8, 163–170. [Google Scholar] [CrossRef]
  45. Dutt, H.M.; Qadir, A. Classification of Scalar Fourth Order Ordinary Differential Equations Linearizable via Generalized Lie–Bäcklund Transformations. In Springer Proceedings in Mathematics & Statistics 2018, Proceedings of the Symmetries, Differential Equations and Applications, SDEA-III, İstanbul, Turkey, 14–17 August 2017; Kac, V.G., Olver, P.J., Winternitz, P., Özer, T., Eds.; Springer: Berlin/Heidelberg, Germany, 2017; pp. 67–74. [Google Scholar]