1. Introduction
The standard way of treating realistic physical problems, described by complicated equations, relies on approximate solutions of the latter, since the occurrence of exact solutions is rather an exception. The most often used method is a kind of perturbation theory based on expansions in powers of some small parameters. This way encounters two typical obstacles: the absence of small parameters and divergence of resulting perturbative series. To overcome these difficulties, different methods of constructing approximate solutions have been suggested.
In this review, it is demonstrated how, starting from asymptotic series, there appear general ideas of improving the series convergence and how these ideas lead to the development of powerful methods of optimized perturbation theory and self-similar approximation theory.
2. Asymptotic Expansions
Let us be interested in finding a real function
of a real variable
x. A generalization to complex-valued functions and variables can be straightforwardly done by considering several real functions and variables. The case of a real function and variable is less cumbersome and allows for the easier explanation of the main ideas. Suppose that the function
is a solution of very complicated equations that cannot be solved exactly and allow only for finding an approximate solution for the asymptotically small variable
in the form
The following cases can happen:
- (i)
Expansion over a small variable:
where the prefactor
is a given function. The expansion is asymptotic in the sense of Poincaré [
1,2], since
with
assumed to be nonzero.
- (ii)
Expansion over a small function:
when the function
tends to zero as
so that
- (iii)
Expansion over an asymptotic sequence:
such that
- (iv)
Generalized asymptotic expansion:
where the coefficients
depend on the variable
x and
is an asymptotic sequence, such that
This type of expansion occurs in the Lindstedt–Poincaré technique [
1,3,4] and in the Krylov–Bogolubov averaging method [5,6,7,8].
- (v)
Expansion over a dummy parameter:
Here, the value of interest corresponds to the limit
, while the series is treated as asymptotic with respect to
, hence
The introduction of dummy parameters is often used in perturbation theory, for instance in the Euler summation method, Nörlund method, and in the Abel method [
9].
Dummy parameters appear when one considers a physical system characterized by a Hamiltonian (or Lagrangian)
H, while starting the consideration with an approximate Hamiltonian
, so that one has
Then, perturbation theory with respect to the dummy parameter yields a series in powers of that parameter. Different iteration procedures can also be treated as expansions in powers of dummy parameters. Sometimes, perturbation theory with respect to a dummy parameter is termed nonperturbative, keeping in mind that it is not a perturbation theory with respect to some other physical parameter, say a coupling parameter. Of course, this misuse of terminology is confusing: perturbation theory with respect to any parameter is still perturbation theory.
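The notion of an asymptotic series is conveniently illustrated by the classic Euler example. The following minimal Python sketch (an illustration added here, not part of the cited material) compares the partial sums of the divergent series Σ_n (−1)^n n! x^n with the Stieltjes integral ∫_0^∞ e^{−t}(1 + xt)^{−1} dt that it represents; the partial sums first improve and then deteriorate, which is the typical behavior of an expansion that is asymptotic in the sense of Poincaré.

```python
import math
from scipy.special import exp1

def euler_partial_sum(x, k):
    """Partial sum of the asymptotic Euler series sum_n (-1)^n n! x^n."""
    return sum((-1) ** n * math.factorial(n) * x ** n for n in range(k + 1))

def stieltjes_exact(x):
    """Exact value of int_0^inf exp(-t)/(1 + x t) dt via the exponential integral."""
    return math.exp(1.0 / x) * exp1(1.0 / x) / x

x = 0.1
exact = stieltjes_exact(x)
for k in range(0, 31, 5):
    print(f"order {k:2d}: error = {abs(euler_partial_sum(x, k) - exact):.3e}")
# The error decreases roughly down to order k ~ 1/x and then grows without bound:
# the series is divergent, yet asymptotic in the sense of Poincare.
```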
3. Sequence Transformations
Asymptotic series are usually divergent. To assign to a divergent series an effective limit, one involves different resummation methods employing sequence transformations [
10]. The most often used are the Padé approximation and Borel summation.
3.1. Padé Approximants
The method of Padé approximants sums the series
by means of rational fractions
with the coefficients
and
expressed through
from the requirement of coincidence of the asymptotic expansions
As is evident from their structure, the Padé approximants provide the best approximation for rational functions. However, in general, they have several deficiencies. First of all, they are not uniquely defined, in the sense that, for a series of order k, there are k + 1 different Padé approximants P_{M/N}, with M + N = k, where M, N = 0, 1, 2, …, and there is no uniquely defined general prescription of which of them to choose. Often, one takes the diagonal approximants P_{N/N}, with M = N. However, these are not necessarily the most accurate [11]. Second, there is the annoying problem of the appearance of spurious poles.
Third, when the sought function at small x behaves as in expansion (7), but at large x it may have a power-law behavior, say f(x) ≃ B x^β as x → ∞, that should be predicted from the extrapolation of the small-variable expansion, then this extrapolation to a large variable cannot in principle be done if the exponent β is not known or is irrational. Let us stress that here one keeps in mind the extrapolation problem from the knowledge of only the small-variable expansion, in the absence of knowledge on the behavior of the sought function at large x. This case should not be confused with the interpolation problem employing the method of two-point Padé approximants, when both expansions, at small as well as at large variables, are available [12].
Finally, the convergence of the Padé approximants is not a simple problem [
11,13], especially when one looks for a summation of a series representing a function that is not known. In the latter case, one tries to observe what is called apparent numerical convergence, which may be absent.
As an example of a problem that is not Padé summable [
14,15], it is possible to mention the series arising in perturbation theory for the eigenvalues of the Hamiltonian
where
,
, and
.
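For concreteness, the construction of a Padé approximant from a given set of expansion coefficients can be sketched as follows in Python; the test series is an assumption chosen for illustration (the expansion of ln(1+x)/x), and the pade routine of scipy.interpolate is used to solve the accuracy-through-order conditions.

```python
import numpy as np
from scipy.interpolate import pade

# Taylor coefficients of the test function f(x) = ln(1 + x)/x = sum_n (-1)^n x^n/(n + 1).
an = np.array([(-1.0) ** n / (n + 1) for n in range(7)])

# Diagonal [3/3] Pade approximant: p and q are numpy.poly1d objects with p/q matching the series.
p, q = pade(an, 3)

x = 3.0
print("Pade [3/3]         :", p(x) / q(x))
print("exact ln(1+x)/x    :", np.log(1.0 + x) / x)
print("bare 6th-order sum :", sum(an[n] * x ** n for n in range(7)))
# Outside the unit radius of convergence the partial sum is useless,
# while the diagonal Pade approximant extrapolates the series accurately.
```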
3.2. Borel Summation
The series (
7) can be Borel summed by representing it as the Laplace integral
of the Borel transform
This procedure is regular, since if series (
7) converges, then
Conditions of Borel summability are given by the Watson theorem [9], according to which a series (7) is Borel summable if it represents a function analytic in a region and, in that region, the coefficients satisfy the inequality
for all orders
n.
The problem in this method arises because the sought function is usually unknown, hence its analytic properties are also not known, and the behavior of the coefficients for large orders n is rarely available. When the initial series is convergent, its Borel transform is also convergent and the integration and the summation in the above formula can be interchanged. However, when the initial series is divergent, the interchange of the integration and summation is not allowed. One has, first, to realize a resummation of the Borel transform and after this to perform the integration.
There are series that cannot be Borel summed. As an example, let us mention a model of a disordered quenched system [
16] with the Hamiltonian
in which
,
, and
, so that the free energy, as a function of the coupling parameter, is
where the statistical sum reads as
By analytic means and by direct computation of 200 terms in the perturbation expansion for the free energy, it was shown [16] that the series is not Borel summable, since the resulting terms do not converge to any limit.
Sometimes, the apparent numerical convergence can be achieved by using the Padé approximation for the Borel transform under the Laplace integral, which is called the Padé–Borel summation.
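A minimal sketch of the Padé–Borel summation may look as follows (Python; the test series and all numerical choices are assumptions made only for illustration): the coefficients are divided by n! to form the Borel transform, the transform is resummed by a diagonal Padé approximant, and the Laplace integral is evaluated numerically.

```python
import math
import numpy as np
from scipy.interpolate import pade
from scipy.integrate import quad

# Test series: f(x) ~ sum_n (-1)^n (2n - 1)!! x^n, the asymptotic expansion of
# F(x) = int_0^inf exp(-t) (1 + 2 x t)^(-1/2) dt.
order = 8
c = [(-1) ** n * math.prod(range(1, 2 * n, 2)) for n in range(order + 1)]

# Borel transform coefficients b_n = c_n / n!, resummed by a diagonal Pade approximant.
b = np.array([c[n] / math.factorial(n) for n in range(order + 1)])
p, q = pade(b, order // 2)

def pade_borel(x):
    """Laplace integral over the Pade-resummed Borel transform."""
    value, _ = quad(lambda t: math.exp(-t) * p(x * t) / q(x * t), 0.0, np.inf)
    return value

x = 0.3
exact, _ = quad(lambda t: math.exp(-t) / math.sqrt(1.0 + 2.0 * x * t), 0.0, np.inf)
bare = sum(c[n] * x ** n for n in range(order + 1))
print(f"Pade-Borel: {pade_borel(x):.6f}   exact: {exact:.6f}   bare partial sum: {bare:.2f}")
```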
4. Optimized Perturbation Theory
The mentioned methods of constructing approximate solutions tell us that there are three main ways that could improve the convergence of the resulting series. These are: (i) the choice of an appropriate initial approximation; (ii) change of variables, and (iii) series transformation. However, the pivotal question arises: How can these choices be optimized?
The idea of optimizing system performance comes from optimal control theory for dynamical systems [
17]. Similarly to dynamical systems, the optimization in perturbation theory implies the introduction of control functions in order to achieve series convergence, as was advanced in Refs. [18,19,20] and employed for describing anharmonic crystals [19,20,21,22,23,24] and the theory of melting [25]. Perturbation theory, complemented by control functions governing the series convergence, is called optimized perturbation theory.
The introduction of control functions means the reorganization of a divergent series into a convergent one. Formally, this can be represented as the operation
converting an initial series into a new one containing control functions
. Then, the optimized approximants are
The optimization conditions define the control functions in such a way as to make the new series
convergent; because of this, the method is called optimized perturbation theory. The general approach to formulating optimization conditions is expounded in the review articles [26,27], and some particular methods are discussed in Refs. [28,29,30]. Control functions can be implanted into perturbation theory in different ways. The main methods are described below.
4.1. Initial Approximation
Each perturbation theory or iterative procedure starts with an initial approximation. It is possible to accept as an initial approximation not a fixed form, but an expression allowing for variations. For concreteness, let us consider a problem characterized by a Hamiltonian H containing a coupling parameter g. Looking for the eigenvalues of the Hamiltonian by means of perturbation theory with respect to the coupling, one comes to a divergent series
As an initial approximating Hamiltonian, one can consider a form
containing trial parameters. For brevity, one parameter
u is written here. Then, the following Hamiltonian is defined:
To find the eigenvalues of the Hamiltonian, one can resort to perturbation theory in powers of the dummy parameter
, yielding
Setting
and defining control functions
from optimization conditions results in the optimized approximants
The explicit way of defining control functions and particular examples are described in the following sections.
4.2. Change of Variables
Control functions can be implanted through the change of variables. Suppose a series
is considered.
Accomplishing the change of the variable
one comes to the functions
. Expanding the latter in powers of the new variable
z, up to the order
k, gives
In terms of the initial variable, this implies
Defining control functions
yields the optimized approximants (
13).
When the variable x varies between zero and infinity, it is sometimes convenient to resort to the change of variables mapping the interval [0, ∞) onto a finite interval, passing to a variable y,
where
and
are control parameters [31,32]. The inverse change of variables is
Expanding this in powers of
y, one obtains:
Defining control functions
and
gives the optimized approximants
Other changes of variables can be found in review [
27].
4.3. Sequence Transformations
Control functions can also be implanted by transforming the terms of the given series by means of some transformation,
Defining control functions
gives
. Accomplishing the inverse transformation results in the optimized approximant
As an example, let us consider the fractal transform [
26] that is needed in what follows:
For this transform, the scaling relation is valid:
The scaling power
s plays the role of a control parameter through which control functions
can be introduced [33,34,35].
5. Statistical Physics
In the problems of statistical physics, before calculating observable quantities, one has to find probabilistic characteristics of the system. This can be either probability distributions, or correlation functions, or Green functions. Thus, first, one needs to develop a procedure for finding approximations for these characteristics, and then to calculate the related approximations for observable quantities. Here, this procedure is exemplified for the case of a system described by means of Green functions [
18,19,20,21,22,23].
Let us consider Green functions for a quantum statistical system with particle interactions measured by a coupling parameter
g. The single-particle Green function (propagator) satisfies the Dyson equation that can be schematically represented as
where
is an approximate propagator and
is the self-energy [36,37].
Usually, one takes for the initial approximation
the propagator of noninteracting (free) particles, whose self-energy is zero. Then, iterating the Dyson equation, one gets the relation
which is a series in powers of the coupling parameter
g. Respectively, the sequence of the approximate propagators
can be used for calculating observable quantities
that are given by a series in powers of g. This is an asymptotic series with respect to the coupling parameter, which, as a rule, is divergent for any finite g.
Instead, it is possible to take for the initial approximation an approximate propagator
containing a control parameter
u. This parameter can, for instance, enter through an external potential [38] corresponding to the self-energy. Then, the Dyson equation reads as
Iterating this equation [
39] yields the approximations for the propagator
This iterative procedure is equivalent to the expansion in powers of a dummy parameter.
Being dependent on the control parameter
u, the propagators
generate the observable quantities
also depending on this parameter. Defining control functions
results in the optimized approximants
for observable quantities.
6. Optimization Conditions
The above sections explain how to incorporate control parameters into the sequence of approximants that, after defining control functions, become optimized approximants. Now, it is necessary to provide a recipe for defining control functions.
By their meaning, control functions have to govern the convergence of the sequence of approximants. The Cauchy criterion tells us that a sequence {F_k} converges if and only if, for any ε > 0, there exists a number k_ε such that |F_{k+p} − F_k| < ε for all k > k_ε and p > 0.
In optimal control theory [
17], control functions are defined as the minimizers of a cost functional. Considering the convergence of a sequence, it is natural to introduce the convergence cost functional [
26]
in which the Cauchy difference is defined,
To minimize the convergence cost functional implies the minimization of the Cauchy difference with respect to control functions,
for all
and
.
In order to derive from this condition explicit equations for control functions, one needs to accomplish some rearrangements. If the Cauchy difference is small, this means that it is possible to assume that
is close to
and
is close to
. Then, one can expand the first term of the Cauchy difference in the Taylor series with respect to
in the vicinity of
, which gives
Let us treat
as a function of the discrete variable
p, which allows us to expand this function in the discrete Taylor series
where a finite difference of
m-th order is
As examples of finite differences, let us mention
Thus, the first term in the Cauchy difference can be represented as
Keeping on the right-hand side of representation (
44) a finite number of terms results in the explicit optimization conditions. The zero order is not sufficient for obtaining optimization conditions, since in this order,
hence, the Cauchy difference is automatically zero:
In the first order:
which gives the Cauchy difference
The minimization of the latter with respect to control functions implies
Minimizing the first part on the right-hand side of expression (
47), one gets the
minimal-difference condition
for the control functions
. The ultimate form of this condition is the equality
The minimization of the second part of the right-hand side of expression (47) leads to the minimal-derivative condition
The minimum of condition (50) is made zero by setting
When this equation has no solution for the control function
, it is straightforward to either set
or to look for the minimum of the derivative
In this way, control functions are defined by one of the above optimization conditions. It is admissible to consider higher orders of expression (44), obtaining higher-order optimization conditions [27].
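In numerical practice, the two conditions can be implemented generically. The sketch below (Python; the function names, the search interval, and the toy approximants are assumptions made for illustration) finds the control value from the minimal-difference condition when the equation F_k − F_{k−1} = 0 has a root, and falls back to the minimal-derivative condition otherwise.

```python
import numpy as np
from scipy.optimize import brentq, minimize_scalar

def control_function(Fk, Fk_prev, x, u_min=0.1, u_max=10.0):
    """Return a control value u for the k-th optimized approximant at the point x.

    Fk(x, u) and Fk_prev(x, u) are the approximants of orders k and k-1 with the
    control parameter u implanted; the search interval (u_min, u_max) is assumed.
    """
    diff = lambda u: Fk(x, u) - Fk_prev(x, u)
    grid = np.linspace(u_min, u_max, 201)
    values = [diff(u) for u in grid]
    # Minimal-difference condition: a sign change of F_k - F_{k-1} locates its root.
    for a, b, fa, fb in zip(grid[:-1], grid[1:], values[:-1], values[1:]):
        if fa * fb < 0.0:
            return brentq(diff, a, b)
    # Otherwise, minimal-derivative condition: minimize |dF_k/du| (numerical derivative).
    h = 1e-5
    dFdu = lambda u: abs(Fk(x, u + h) - Fk(x, u - h)) / (2.0 * h)
    return minimize_scalar(dFdu, bounds=(u_min, u_max), method="bounded").x

# Toy usage with two hypothetical low-order approximants of some quantity f(x):
F1 = lambda x, u: u + x / u                   # hypothetical first-order approximant
F2 = lambda x, u: u + x / u - x**2 / u**3     # hypothetical second-order approximant
u2 = control_function(F2, F1, x=1.0)
print("control value u_2(1) =", u2, "  optimized approximant:", F2(1.0, u2))
```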
Control functions can also be defined if some additional information on the sought function
is available. For instance, when the asymptotic behavior of
, as
, is known, where
then the control functions
can be defined from the
asymptotic condition
7. Thermodynamic Potential
As an illustration of using the optimized perturbation theory, let us consider the thermodynamic potential
of the so-called zero-dimensional anharmonic oscillator model with the statistical sum
and the Hamiltonian
Taking for the initial approximation the quadratic Hamiltonian
in which
is a control parameter, one defines
where the perturbation term is
Employing perturbation theory with respect to the dummy parameter, and setting it equal to one, leads to the sequence of the approximants
Control functions for the approximations of odd orders are found from the minimal derivative condition
For even orders, the above equation does not possess real-valued solutions, because of which
is set.
Thus, one obtains the optimized approximants,
Their accuracy can be characterized by the maximal percentage error
comparing the optimized approximants with the exact expression (
56). These maximal errors are
As one can see, with just a few terms, quite good accuracy is reached, while the bare perturbation theory in powers of the coupling parameter g is divergent. Details can be found in the review [27].
This simple model allows for explicitly studying the convergence of the sequence of the optimized approximants. It has been proved [
40,41] that this sequence converges for both ways of defining control functions, either from the minimal-derivative or the minimal-difference condition.
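The first-order step of this calculation is easy to reproduce. The following Python sketch assumes the common convention Z(g) = π^{−1/2} ∫ exp(−x² − gx⁴) dx and F = −ln Z; the expression F_1(g, ω) and the control-function equation ω⁴ − ω² − 3g = 0 below are derived for this convention and serve only to illustrate the scheme, not to reproduce the exact numbers quoted in the cited works.

```python
import numpy as np
from scipy.integrate import quad

def free_energy_exact(g):
    """F(g) = -ln Z(g) with Z(g) = pi^{-1/2} int exp(-x^2 - g x^4) dx (assumed convention)."""
    z, _ = quad(lambda x: np.exp(-x**2 - g * x**4), -np.inf, np.inf)
    return -np.log(z / np.sqrt(np.pi))

def free_energy_opt1(g):
    """First-order optimized approximant F_1(g, w_1(g)).

    F_1(g, w) = ln w + (1 - w^2)/(2 w^2) + 3 g/(4 w^4) follows from a Gaussian trial
    weight exp(-w^2 x^2); the control function w_1(g) solves dF_1/dw = 0, that is,
    w^4 - w^2 - 3 g = 0.
    """
    w = np.sqrt((1.0 + np.sqrt(1.0 + 12.0 * g)) / 2.0)
    return np.log(w) + (1.0 - w**2) / (2.0 * w**2) + 3.0 * g / (4.0 * w**4)

for g in (0.5, 1.0, 10.0, 100.0):
    exact = free_energy_exact(g)
    approx = free_energy_opt1(g)
    print(f"g = {g:6.1f}:  F_1 = {approx: .5f}   F_exact = {exact: .5f}   "
          f"error = {100 * abs(approx - exact) / abs(exact):.2f}%")
```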
8. Eigenvalue Problem
Another typical example is the calculation of the eigenvalues of Schrödinger operators, defined by the eigenproblem
Let us consider a one-dimensional anharmonic oscillator with the Hamiltonian
in which
and
.
For the initial approximation, let us take the harmonic oscillator model:
with a control parameter
. Following the approach,
is defined, where
Employing the Rayleigh–Schrödinger perturbation theory with respect to the dummy parameter
ε, one obtains the spectrum
where k enumerates the approximation order and n = 0, 1, 2, … is the quantum number labeling the states. The zero-order eigenvalue is
For odd orders, control functions can be found from the optimization condition
For even orders, the above equation does not possess real-valued solutions, because of which let us set
Using optimized perturbation theory results in the eigenvalues
Comparing these with the numerically found eigenvalues [42], the percentage errors
are defined.
Then, one can find the maximal error of the
k-th order approximation
which gives
The maximal errors
for the ground state are
Again, good accuracy and numerical convergence are observed. Recall that the bare perturbation theory in powers of the anharmonicity parameter g diverges for any finite g. The convergence of the sequence of the optimized approximants can be proved analytically [43,44]. More details can be found in Ref. [27].
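A similar first-order illustration for the quartic oscillator is sketched below in Python, assuming the convention H = p²/2 + x²/2 + gx⁴; the first-order approximant E_1(g, ω) = ω/4 + 1/(4ω) + 3g/(4ω²) and the classic control-function equation ω³ − ω − 6g = 0 follow from a harmonic trial Hamiltonian of frequency ω, and the result is compared with a straightforward grid diagonalization.

```python
import numpy as np
from scipy.optimize import brentq

def e1_optimized(g):
    """First-order optimized ground-state energy for H = p^2/2 + x^2/2 + g x^4.

    E_1(g, w) = w/4 + 1/(4w) + 3g/(4w^2); the control function w_1(g) solves
    the minimal-derivative condition w^3 - w - 6g = 0.
    """
    w = brentq(lambda w: w**3 - w - 6.0 * g, 1.0, 10.0 * (1.0 + g))
    return w / 4.0 + 1.0 / (4.0 * w) + 3.0 * g / (4.0 * w**2)

def e0_numerical(g, L=8.0, n=2000):
    """Ground-state energy from a finite-difference diagonalization on [-L, L]."""
    x = np.linspace(-L, L, n)
    h = x[1] - x[0]
    hamiltonian = (np.diag(np.full(n, 1.0 / h**2) + 0.5 * x**2 + g * x**4)
                   - np.diag(np.full(n - 1, 0.5 / h**2), 1)
                   - np.diag(np.full(n - 1, 0.5 / h**2), -1))
    return np.linalg.eigvalsh(hamiltonian)[0]

for g in (0.1, 1.0, 10.0, 100.0):
    e1, e0 = e1_optimized(g), e0_numerical(g)
    print(f"g = {g:6.1f}:  E_1 = {e1:.6f}   E_numerical = {e0:.6f}   "
          f"error = {100.0 * abs(e1 - e0) / e0:.2f}%")
```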
9. Nonlinear Schrödinger Equation
The method can be applied to strongly nonlinear systems. Let us illustrate this by considering the eigenvalue problem
with the nonlinear Hamiltonian
Here,
N is the number of trapped atoms, and the potential
is an external potential trapping atoms whose interactions are measured by the parameter
where
is the scattering length. This problem is typical of trapped atoms in the Bose–Einstein condensed state [45,46,47,48,49,50,51,52].
The trap anisotropy is characterized by the trap aspect ratio
It is convenient to introduce the dimensionless coupling parameter
Measuring energy in units of
and lengths in units of
, one can pass to dimensionless units and write the nonlinear Hamiltonian as
with a dimensionless wave function
.
Applying optimized perturbation theory to the nonlinear Hamiltonian [53,54], one takes for the initial approximation the oscillator Hamiltonian
in which u and v are control parameters. The zero-order spectrum is given by the expression:
with the radial, azimuthal, and axial quantum numbers. The related wave functions are the Laguerre–Hermite modes. The system Hamiltonian takes the form
where the perturbation term is
Perturbation theory with respect to the dummy parameter
gives the energy levels
. The control functions are defined by the optimization conditions
yielding
and
. Applications to trapped atoms are discussed in Refs. [53,54].
10. Hamiltonian Envelopes
When choosing for the initial approximation a Hamiltonian, one confronts the problem of combining two conditions often contradicting each other. From one side, the initial approximation has to possess the properties imitating the studied problem. From the other side, it has to be exactly solvable, providing tools for the explicit calculation of the terms of perturbation theory. If the studied Hamiltonian and the Hamiltonian of the initial approximation are too different, perturbation theory, even being optimized, may be poorly convergent. In such a case, it is possible to invoke the method of Hamiltonian envelopes [
27,55].
10.1. General Idea
Suppose one takes as an initial approximation a Hamiltonian
that, however, is very different from the considered Hamiltonian
H. The difficulty is that the set of exactly solvable problems is very limited, so that sometimes it is impossible to find another Hamiltonian that would be close to the studied form
H and at the same time solvable. In that case, one can proceed as follows. Notice that, if a Hamiltonian
defines the eigenproblem
then a function
satisfies the eigenproblem
enjoying the same eigenfunctions. The function
can be called the Hamiltonian envelope [27,55]. Note that, because of the property (90), it can be any real function.
Accepting
as an initial Hamiltonian, one obtains the system Hamiltonian,
with the perturbation term
If one finds an envelope function that imitates the studied system better than the bare initial Hamiltonian, then the convergence of the sequence of approximations can be improved.
The general idea in looking for the function
is as follows. Let the system Hamiltonian be
In addition, let the eigenproblem for a Hamiltonian
enjoy exact solutions, although poorly approximating the given system.
Looking for the function
, one keeps in mind that the strongest influence on the behavior of the wave functions is produced by the region where the system potential
displays singular behavior tending to
. Suppose this happens at the point
. Then, the function
has to be chosen such that
that is, the function
needs to possess the same type of singularity as the potential of the studied system. Below, it is illustrated how this choice is made for concrete examples.
10.2. Power-Law Potentials
Let us consider the Hamiltonian with a power-law potential
in which
,
,
, and
. To pass to dimensionless units, the energy and length quantities are scaled as
The dimensionless coupling parameter is
In what follows, in order not to complicate notation, the bars are omitted above dimensionless quantities. In dimensionless units, one gets the Hamiltonian
In order to return to the dimensional form, it is sufficient to make the substitution
Taking for
the Hamiltonian
let us compare the potentials
As it is evident, the singular point here is
. To satisfy condition (
97) for
, one needs to take
since
while, for
,
needs to be accepted, since now
In that way, the Hamiltonian envelope is given by the function
10.3. Inverse Power-Law Potentials
The radial Hamiltonian with an inverse power-law potential has the form
in which
,
,
, and
. Again, one can introduce the dimensionless quantities,
and the dimensionless coupling parameter
where
is arbitrary. Since
is arbitrary, it can be chosen such that the coupling parameter equals unity,
In dimensionless units, the Hamiltonian becomes:
This reminds us of the Coulomb problem with the Hamiltonian
Here,
u is a control parameter. Comparing the potentials
one can see that, to satisfy condition (97), one has to take the envelope function as
as far as
Then, the Hamiltonian envelope reads as
10.4. Logarithmic Potential
As one more example, let us take the radial Hamiltonian of arbitrary dimensionality with the logarithmic potential
where
,
,
, and the effective radial quantum number is
Again, one needs to work with dimensionless quantities, defining
and the dimensionless coupling parameter
Then, for the simplicity of notation, let us omit the bars over the letters and get the dimensionless Hamiltonian,
Accepting at the starting step the oscillator Hamiltonian
one has to compare the potentials
Now, the singular points are
and
. This dictates the choice of the envelope function
since
Some explicit calculations can be found in Refs. [
27,55].
Optimized perturbation theory, whose main points are expounded above, has been applied to a great variety of problems in statistical physics, condensed matter physics, chemical physics, quantum field theory, etc., as is reviewed in Ref. [
27].
11. Optimized Expansions: Summary
As is explained above, the main idea of optimized perturbation theory is the introduction of control parameters that generate order-dependent control functions controlling the convergence of the sequence of optimized approximants. Control functions can be incorporated in the perturbation theory in three main ways: by choosing an initial approximation containing control parameters, by making a change of variables and resorting to a reexpansion trick, or by accomplishing a transformation of the given perturbation sequence. Control functions are defined by optimization conditions. Of course, there are different variants of implanting control functions and choosing the appropriate variables. In some cases, control functions can become control parameters , since constants are just a particular example of functions.
Below, the main ideas are summarized, shedding light on the common points for choosing control functions and the variables for expansions, on the convergence of the sequence of optimized approximants, and on the cases when control functions can be reduced to control parameters. In addition, several methods of optimization are compared. To make the discussion transparent, the ideas are illustrated by the example of the partition function of a zero-dimensional field theory and by the model of a one-dimensional anharmonic oscillator.
11.1. Expansion over Dummy Parameters
The standard and often used scheme of optimized perturbation theory is based on the incorporation of control functions through initial approximations, as is mentioned in
Section 4.1. Suppose one deals with a Hamiltonian
containing a physical parameter
g, say coupling parameter. When the problem cannot be solved exactly, one takes a trial Hamiltonian
containing control parameters denoted through
u. One introduces the Hamiltonian
in which
is a dummy parameter. One calculates the quantity of interest
by means of perturbation theory in powers of the dummy parameter
, after which one sends this parameter to one.
Employing one of the optimization conditions discussed in
Section 6, one finds the control functions
. The most often used optimization conditions are the minimal-difference condition
and the minimal-derivative condition
Substituting the found control functions
into
results in the optimized approximants
This scheme of optimized perturbation theory was suggested and employed in Refs. [
18,19,20,21,22,23] and in numerous subsequent publications, as can be inferred from the review works [26,27,28,29,30]. As is evident, the same scheme can be used when dealing with Lagrangians or action functionals.
Instead of the adopted notation for the dummy parameter, any other letter can be used, which, as is clear, is of no importance. Sometimes, one denotes the dummy parameter as δ and, using the same standard scheme, calls it the delta expansion. However, using a different notation does not constitute a different method.
11.2. Scaling Relations: Partition Function
The choice of variables for each particular problem is the matter of convenience. Often, it is convenient to use the combinations of parameters naturally occurring in the considered case. These combinations can be found from the scaling relations available for the considered problem.
Let us start with the simple, but instructive, case of the integral representing the partition function (or generating functional) of the so-called zero-dimensional
field theory
with the Hamiltonian
where
.
Invoking the scaling
leads to the relation
Setting
yields the equality
In addition, setting
gives
These relations show that, at a large coupling constant, the expansion is realized over the combination , while, at a small coupling constant, the natural expansion is over .
11.3. Scaling Relations: Anharmonic Oscillator
The other typical example frequently treated for demonstrational purposes is the one-dimensional anharmonic oscillator with the Hamiltonian
where
. Let the energy levels
of the Hamiltonian be of interest.
By scaling the spatial variable,
results in the relation
Setting
gives
while, for
, one gets the relation,
In particular, for the quartic anharmonic oscillator, with
, one has
and
Again, these relations suggest the natural variables for expansions at large or small coupling constants.
11.4. Optimized Expansion: Partition Function
The standard scheme of optimized perturbation theory has been applied to the model (129) many times, accepting as an initial Hamiltonian the form
in which
is a control parameter. Then, Hamiltonian (
124) becomes:
Note that Hamiltonian (130) transforms into Equation (141) by means of the replacement
Following the standard scheme of optimized perturbation theory for the partition function, and using the optimization conditions for defining control functions, it was found [
40,41,44] that, at large orders, the control functions behave as
The minimal-difference and minimal-derivative conditions give
. It was proved [40,41] that this scheme results in the sequence of optimized approximants for the partition function that converges to the exact numerical value. The convergence occurs for any value of the coupling parameter.
11.5. Optimized Expansion: Anharmonic Oscillator
The one-dimensional quartic anharmonic oscillator with the Hamiltonian (
134), where
and
, also serves as a typical touchstone for testing approximation methods. The initial approximation is characterized by the harmonic oscillator
in which
is a control parameter. The Hamiltonian (
124) takes the form
As is seen, the transformation from Equation (132) into Equation (143) is realized by the same substitution (140), with the substitution for
that can be represented as
This shows the appearance of the characteristic combination 1 − (ω₀/ω)² that is used below.
Calculating the energy eigenvalues following the standard scheme, one finds [
43,44] the control function
with
for both the minimal-difference and minimal-derivative conditions. The convergence of the sequence of optimized approximants to the exact numerical values [42], found from the solution of the Schrödinger equation, takes place for any value of the coupling constant.
12. Order-Dependent Mapping
Sometimes, the procedure can be simplified by transforming the initial expansion, say in powers of a coupling constant, into expansions in powers of other parameters. By choosing the appropriate change of variables, it can be possible to reduce the problem to the form where control functions
are downgraded to control parameters
. The change of variables depends on the approximation order, because of which it is called the order-dependent mapping [
56].
12.1. Change of Variables
Let us be given an expansion in powers of a variable
g,
By analyzing the properties of the considered problem, such as its scaling relations and the typical combinations of parameters arising in the process of deriving perturbative series, it is possible to notice that it is convenient to denote some parameter combinations as new variables. Then, one introduces the change of variables
where
is treated as a control parameter. Substituting Equation (149) into Equation (148) gives the function
, which has to be expanded in powers of z up to order k, leading to the series
The minimal-difference condition
yields the equation
defining the control parameters
. According to Equation (150), the value
denotes the combination of parameters
, hence it determines the control functions
. The pair
and
, being substituted into Equation (
151), results in the optimized approximants
Thus, the convenience of the chosen change of variables is in the possibility of dealing at the intermediate step with control parameters instead of control functions that appear at a later stage.
12.2. Partition Function
To illustrate the method, let us consider the partition function (
129) following the described scheme [56]. From the substitution (142), it is clear that the natural combinations of parameters appearing in perturbation theory with respect to the term with
in the Hamiltonian (
141) are
and
Then, the combination of parameters (
150) reads as
In order to simplify the notation, it is possible to notice that the parameter
always enters the equations being divided by
. Therefore, measuring
in units of
is equivalent to setting
. In these units,
Finding from the minimal-difference condition (
152) the control parameter
and using definition (
157) gives the control function
Then, relation (
155) results in the control function
Finally, one obtains the partition function.
This procedure, with the change of variables used above, has been shown [
57] to be equivalent to the standard scheme of optimized perturbation theory resulting in optimized approximants
.
12.3. Anharmonic Oscillator
Again using the dimensionless units, as in the previous section, one sets the notations
Then, the combination (
150) becomes:
Similarly to the previous section, one finds the control parameter
and, from Equation (
161), one obtains the control functions
and
. The resulting energy levels
coincide with the optimized approximants
, as has been proved in [
57].
13. Variational Expansions
The given expansion over the coupling constant (148) can be reexpanded with respect to other variables in several ways. One of the possible reexpansions has been termed variational perturbation theory [31]. Below, it is illustrated by the example of the anharmonic oscillator in order to compare this type of reexpansion with other methods.
Let us consider the energy levels of the anharmonic oscillator with the Hamiltonian (
134) with
. As is clear from the scaling relations of
Section 11, the energy can be represented as an expansion
One has the identity,
that is a particular case of the substitution (
142) with the control parameter
and
. Employing the notation
where
it is straightforward to rewrite the identity (163) in the form
This form is substituted into expansion (
162), which then is reexpanded in powers of the new variable
, while keeping
u untouched and setting
to one. The reexpanded series is truncated at order
k. Comparing this step with the expansion in
Section 11, it is evident that this is equivalent to the expansion over the dummy parameter
. In addition, comparing the expansion over
with the expansion over
z in
Section 12, one can see that they are also equivalent. Thus, one comes to the expansion
where
Then, one substitutes back the expression (
165) for
.
The control function
is defined by the minimal-derivative condition or, when the latter does not have real solutions, by the vanishing of the second derivative over
of the energy
. The found control function
is substituted into
, thus giving the optimized approximant
The equivalence of the above expansion in powers of
to the expansions with respect to the dummy parameter
, or with respect to the parameter
ε, becomes evident if one uses the notation of the present section and notices that the substitution (146) can be written as
This makes it immediately clear that the expansion over
, keeping u untouched, is identical to the expansion over the dummy parameter.
14. Control Functions and Control Parameters
It is important to remark that it is necessary to be cautious introducing control functions through the change of variables and reexpansion. Strictly speaking, such a change cannot be postulated arbitrarily. When the change of variables is analogous to the procedure of using the substitutions, such as Equations (
142), (146) or (169), naturally arising in perturbation theory, as in Section 4, then the results of these variants will be close to each other. However, if the change of variables is arbitrary, the results can be not merely inaccurate, but even qualitatively incorrect [27,58].
It is also useful to mention that, employing the term control functions, one keeps in mind that, in particular cases, they can happen to become parameters, although order-dependent ones. Then, instead of control functions, one has control parameters. There is nothing wrong with this, as far as parameters are a particular example of functions. The reduction of control functions to control parameters can occur in the following cases.
It may happen that in the considered problem there exists such a combination of characteristics that compose the quantities
depending only on the approximation order but not depending on the variable
x. For instance, this happens in the mapping of Section 12, where the combinations
play the role of control parameters. In the case of the partition function, this is the combination (157), and for the anharmonic oscillator, it is the combination (161).
The other example is the existence in the applied optimization of several conditions restricting the choice of control parameters. The typical situation is when the optimization condition consists of the comparison of asymptotic expansions of the sought function and of the approximant. Suppose that, in addition to the small-variable expansion,
the large-variable expansion of the sought function,
is known.
Let us assume that the optimized approximant
is found where the control functions
are defined by one of the optimization conditions of
Section 6. These conditions provide a uniform approximation of the sought function on the whole interval of its definition. However, the resulting approximants
are not required to give exact coefficients of asymptotic expansions either at small or at large variable
x. If these asymptotic coefficients need to coincide exactly with the coefficients of the known asymptotic expansions (170) and (171), then one has to implant additional control parameters and impose additional asymptotic conditions. This can be done by using the method of corrected Padé approximants [27,59,60,61,62]. To this end, the optimized approximant is defined as
where
is a diagonal Padé approximant, whose coefficients
and
, playing the role of control parameters, are prescribed by the accuracy-through-order procedure, so that the asymptotic expansions of Equation (172) coincide with the given asymptotic expansions of the sought function at small x,
and at large
x,
The number of parameters in the Padé approximant is chosen so as to satisfy the imposed asymptotic conditions (174) and (175).
15. Self-Similar Approximation Theory
As has been emphasized above, the idea of introducing control functions for the purpose of governing the convergence of a sequence stems from the optimal control theory, where one introduces control functions in order to regulate the trajectory of a dynamical system, for instance, in order to force the trajectory to converge to a desired point. The analogy between perturbation theory and the theory of dynamical systems has been strengthened even more in the self-similar approximation theory [
26,27,63,64,65,66,67]. The idea of this theory is to consider the transfer from one approximation to another as the motion on the manifold of approximants, where the approximation order plays the role of discrete time.
Suppose, after implanting control functions, as explained in
Section 4, one has the sequence of approximants
. Recall that the control functions can be defined in different ways, as has been discussed above. Therefore, we actually have a manifold of approximants associated with different control functions,
This is to be called the approximation manifold. Generally, it could be possible to define a space of approximants; however, the term approximation space is used in mathematics in a different sense [68]. Thus, one deals with the approximation manifold. The transfer from an approximant
to another approximant
can be understood as the motion with respect to the discrete time, whose role is played by the approximation order
k. The sequence of approximants
with a fixed choice of control functions
defines a trajectory on the approximation manifold (
176).
Let us fix the rheonomic constraint
defining the expansion function
. Recall that, in the theory of dynamical systems, a rheonomic constraint is one whose constraint equations explicitly contain or depend upon time. In the case considered here, the time is the approximation order k. The inverse constraint equation is
Let us introduce the endomorphism
by the definition acting as
This endomorphism and the approximants are connected by the equality
The set of endomorphisms forms a dynamical system in discrete time
with the initial condition
By this construction, the sequence of endomorphisms
, forming the dynamical system trajectory, is bijective to the sequence of approximants
. Since control functions, by default, make the sequence of approximants
convergent, this means that there exists a limit
In addition, as far as the sequence of approximants is bijective to the trajectory of the dynamical system, there should exist the limit
This limit, being the final point of the trajectory, is a fixed point, for which
Thus, finding the limit of an approximation sequence is equivalent to determining the fixed point of the dynamical system trajectory.
One may notice that, for large
p, the self-similar relation holds:
which follows from conditions (185) and (186). Since, in real situations, it is usually impossible to reach the limit of infinite approximation order, the validity of the self-similar relation for finite approximation orders is assumed:
This relation implies the semi-group property
The dynamical system in discrete time (182) with the above semi-group property is called a cascade (semicascade). The theory of such dynamical systems is well developed [69,70]. In the present study, this is an approximation cascade [27].
Since, as said above, in realistic situations it is possible to deal only with finite approximation orders, one finds not an exact fixed point
, but an approximate fixed point
. The corresponding approximate limit of the considered sequence is
If the form
is obtained by means of a transformation
like in Equation (
27), then the resulting self-similar approximant reads as
16. Embedding Cascade into Flow
Usually, it is more convenient to deal with dynamical systems in continuous time than with systems in discrete time. For this purpose, it is possible to embed the approximation cascade into an
approximation flow, which is denoted as
and implies that the endomorphism in continuous time enjoys the same group property as the endomorphism in discrete time,
that the flow trajectory passes through all points of the cascade trajectory,
and starts from the same initial point,
The self-similar relation (
194) can be represented as the Lie equation
in which
is a velocity field. Integrating the latter equation yields the evolution integral
where
is the time required for reaching the fixed point
from the approximant
. Using relations (
181) and (
190), this can be rewritten as
where
and
.
The velocity field can be represented resorting to the Euler discretization
This is equivalent to the form
in which
One may notice that the velocity field is directly connected with the Cauchy difference (
39), since
As is explained in
Section 6, the Cauchy difference of zero order equals zero, hence in that order the velocity is zero, and
. The Cauchy difference of first order is nontrivial, being given by expression (
46). In this order, the velocity field becomes:
The smaller the velocity, the faster the fixed point is reached. Therefore, control functions should be defined in order to make the velocity field minimal:
Thus, one returns to the optimization conditions of optimized perturbation theory, discussed in
Section 6. Opting for the optimization condition
simplifies the velocity field to the form
17. Stability Conditions
The sequence
defines the trajectory of the approximation cascade, which is a type of dynamical system. The motion of dynamical systems can be stable or unstable. The stability of motion for the approximation cascade can be characterized [27,67,71] similarly to the stability of other dynamical systems [69,72,73]. Dealing with real problems, one usually considers finite steps k. Therefore, the motion stability can be defined only locally.
The local stability at the
k-th step is described by the local map multiplier
The motion at the step
k, starting from an initial point
f, is stable when
The maximal map multiplier
defines the global stability with respect to
f, provided that
The maximum is taken over all admissible values of f.
The image of the map multiplier (
207) on the manifold of the variable
x is
The motion at the
k-th step at the point
x is stable if
Respectively, the motion is globally stable with respect to the domain of
x when the maximal map multiplier
is such that
The map multiplier at the fixed point
is
The fixed point is locally stable when
and it is globally stable with respect to
f if the maximal multiplier
satisfies the inequality
The above conditions of stability can be rewritten in terms of the local Lyapunov exponents
The motion at the k-th step is stable provided the Lyapunov exponents are negative. The occurrence of local stability implies that the calculational procedure is numerically convergent at the considered steps. Thus, even without knowing the exact solution of the problem and being unable to reach the limit of infinite approximation order, one can be sure that local numerical convergence for finite k is present.
18. Free Energy
In order to demonstrate that the self-similar approximation theory improves the results of optimized perturbation theory, it is instructive to consider the same problem of calculating the free energy (thermodynamic potential) of the model discussed in Section 7,
with the statistical sum (57).
Following Section 7, let us accept the initial Hamiltonian (59) and define the Hamiltonian (60). Expanding the free energy (220) in powers of the dummy parameter ε, one has the sequence of approximants (62). The control functions ω_k(g) are defined by the optimization conditions (63) and (64), which give
where
The rheonomic constraint (
177) takes the form
From here, the expansion function,
is obtained.
The endomorphism (
180) reads as
with the coefficients
given in Refs. [27,71,74,75], and where
The cascade velocity (
200) becomes:
Taking the evolution integral (
200), with
, one comes to the self-similar approximants
The accuracy of the approximations is described by the percentage errors
where
is the exact numerical value of expression (
220). Here, one has
The map multipliers (
207) are
The coupling parameter g pertains to the domain
. Then,
, and
. The maximal map multiplier (209) is found to satisfy the stability condition (210).
19. Fractal Transform
As is explained in Section 4, control functions can be incorporated into a perturbative sequence either through initial conditions, or by means of a change of variables, or by a sequence transformation. In the above example of Section 14, the implantation of control functions into initial conditions was considered. Now, let us study another way, when control functions are incorporated through a sequence transformation.
Let us consider an asymptotic series
in which
is a given function. Actually, it is sufficient to deal with the series
To return to the case of series (
230), one just needs to make the substitution
Following the spirit of self-similarity, let us recall that the latter is usually connected with the power-law scaling and fractal structures [
76,77,78]. Therefore, it looks natural to introduce control functions through a fractal transform [79], say of the type [26,27,33,34,35]
The inverse transformation is
As is mentioned in Section 4, the scaling relation (30) is valid. The scaling exponent s plays the role of a control parameter.
In line with the self-similar approximation theory, let us define the rheonomic constraint
yielding the expansion function
The dynamic endomorphism becomes:
In addition, the cascade velocity is
What now remains is to consider the evolution integral.
20. Self-Similar Root Approximants
The differential Equation (
197) can be rewritten in the integral form
Substituting here the cascade velocity (
239) gives the relation
where
Accomplishing the inverse transformation (
234) leads to the equation
The explicit form of the latter is the recurrent relation
Using the notation
and iterating this relation
times results in the self-similar root approximant
This approximant is convenient for the problem of interpolation, where one can meet different situations.
- (i)
The k coefficients of the asymptotic expansion (231) up to the k-th order are known and the exponent
of the large-variable behavior of the sought function is available, where
although the amplitude B is not known. Then, setting the control functions
, from Equation (244), one has:
and the root approximant (
245) becomes:
For large variables
x, the latter behaves as
with the amplitude
and exponent
Equating
to the known exponent
, one finds the root exponent,
All parameters
can be found from the comparison of the initial series (231) with the small-variable expansion of the root approximant (248),
which is called the accuracy-through-order procedure (a small numerical illustration of this matching is given below). Knowing all
, the large-variable amplitude
is obtained.
- (ii)
The k coefficients of the asymptotic expansion (231) up to the k-th order are available and the amplitude B of the large-variable behavior of the sought function is known, but the large-variable exponent
is not known. Then, the parameters
again are defined through the accuracy-through-order procedure (
253). Equating the amplitudes
and
B results in the exponent
- (iii)
The k coefficients of the asymptotic expansion (231) are known and the large-variable behavior (246) is available, with both the amplitude B and the exponent known. Then, as earlier, the parameters are defined from the accuracy-through-order procedure and the exponent is given by Equation (252). The amplitude can be found in two ways: from expression (250), or by equating the large-variable amplitude of the approximant and B. The difference between the resulting values characterizes the accuracy of the approximant.
- (iv)
The
k terms of the large-variable behavior are given,
where
,
, and the powers
are arranged in descending order,
Then, considering the root approximant (245) for large
, and comparing this expansion with the asymptotic form (255), one finds all parameters
expressed through the coefficients
, and the large-variable internal exponents are
while the external exponent is
It is important to mention that the external exponent
can be defined even without knowing the large-variable behavior of the sought function. This can be done by treating
as a control function defined by an optimization condition from Section 6. This method has been suggested in Ref. [33].
Notice that, when it is more convenient to deal with series for large variables, it is always possible to use the same methods as described above by transforming the large-variable expansions into small-variable ones by means of the change of the variable to its inverse.
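The accuracy-through-order matching mentioned in case (i) can be illustrated by a small Python/sympy sketch. It is an illustration with assumed input: the expansion 1 + x + x² of the test function (1 + 2x + 3x²)^{1/2}, the known large-variable exponent β = 1, the internal exponent n₁ = 2, and the external exponent β/2. For such a function the second-order root approximant reproduces it exactly.

```python
import sympy as sp

x, P1, P2 = sp.symbols("x P1 P2", positive=True)

# Test function: small-variable expansion 1 + x + x^2 + ..., large-variable
# behavior f(x) ~ sqrt(3) x, i.e. exponent beta = 1 (the amplitude is not used as input).
f = sp.sqrt(1 + 2 * x + 3 * x**2)
target = sp.series(f, x, 0, 3).removeO()

beta = 1
# Second-order root approximant with internal exponent 2 and external exponent beta/2.
R2 = ((1 + P1 * x) ** 2 + P2 * x**2) ** sp.Rational(beta, 2)
expansion = sp.series(R2, x, 0, 3).removeO()

# Accuracy-through-order conditions: equate the coefficients of x and x^2.
sol = sp.solve([sp.Eq(expansion.coeff(x, k), target.coeff(x, k)) for k in (1, 2)],
               [P1, P2], dict=True)[0]
R2_star = R2.subs(sol)
print("parameters       :", sol)                         # P1 = 1, P2 = 2
print("root approximant :", sp.simplify(R2_star))        # sqrt(3*x**2 + 2*x + 1)
print("predicted amplitude B =", sp.limit(R2_star / x**beta, x, sp.oo))  # sqrt(3)
```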
Numerous applications of the self-similar root approximants to different problems are discussed in Refs. [
26,27,80,81,82,83,84].
21. Self-Similar Nested Approximants
It is possible to notice that the series
can be represented as the sequence
etc., through
up to the last term
Applying the self-similar renormalization at each order of the sequence, considering
as variables, one obtains the renormalized sequence:
in which
One comes to the self-similar nested approximant
For large
x, this gives
with the amplitude
and the exponent
If the notation for the external exponent is changed to
and the internal exponents are kept constant,
then the large-variable exponent becomes:
When the exponent
of the large-variable behavior is known, where
then, setting
gives
The parameter
m should be defined in order to provide numerical convergence for the sequence
. For instance, if
, then using the asymptotic form
one gets
In the latter case, the nested approximant (
265), with the notation
becomes
The same form can be obtained by setting in the root approximant (
245) all internal exponents
.
The external exponent
can also be defined by resorting to the optimization conditions of
Section 6. Several applications of the nested approximants are given in Ref. [85].
22. Self-Similar Exponential Approximants
When the behavior of the sought function is expected to be exponential rather than of power-law type, then, in the nested approximants of the previous section, one can send
, hence
and
. This results in the self-similar exponential approximants [
86]
in which
The parameters
are to be defined from additional conditions [
26,
27], so that the sequence of the approximants is convergent. It is often sufficient to set
. This expression appears as follows. By its meaning,
is the effective time required for reaching a fixed point from the previous step. Accomplishing
n steps takes the time of order
. The minimal time corresponds to one step. Equating
and one gives
. Some other ways of defining the control parameters
are considered in Refs. [
26,27,86].
23. Self-Similar Factor Approximants
By the fundamental theorem of algebra [87], a polynomial of any degree in one real variable over the field of real numbers can be split in a unique way into a product of irreducible first-degree polynomials over the field of complex numbers. This means that series (259) can be represented in the form
with the coefficients
expressed through
. Applying the self-similar renormalization procedure to each of the factors in turn results in the self-similar factor approximants [
88,89,90]
where
The control parameters
and
are defined by the accuracy-through-order procedure, by equating the like-order terms in the expansions
and
,
In the present case, it is more convenient to compare the corresponding logarithms
This leads to the system of equations
in which
This system of equations enjoys a unique (up to an enumeration permutation) solution for all
and
when k is even; when k is odd, one of the parameters can be set to one [27,91].
At large values of the variable, one has:
where the amplitude and the large-variable exponent are
If the large-variable exponent is known, for instance from scaling arguments, so that
then equating
and
imposes on the exponents of the factor approximant the constraint
The self-similar factor approximants have been used for a variety of problems, as can be inferred from Refs. [
27,88,89,90,91,92].
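The accuracy-through-order determination of the parameters can be illustrated by the following Python/sympy sketch (a toy example with assumed input, namely the expansion of (1 + 2x)^{3/4}); for a single power-law factor, the second-order factor approximant recovers the function exactly, with A = 2 and n = 3/4.

```python
import sympy as sp

x = sp.symbols("x", positive=True)
A, n = sp.symbols("A n", real=True)

# Test function and its expansion up to second order (f_0 = 1 here).
f = (1 + 2 * x) ** sp.Rational(3, 4)
target = sp.series(f, x, 0, 3).removeO()

# Second-order factor approximant: a single factor (1 + A x)^n.
F2 = (1 + A * x) ** n
expansion = sp.series(F2, x, 0, 3).removeO()

# Accuracy-through-order conditions for the coefficients of x and x^2.
sol = sp.solve([sp.Eq(expansion.coeff(x, k), target.coeff(x, k)) for k in (1, 2)],
               [A, n], dict=True)
print("solution          :", sol)                  # A = 2, n = 3/4
print("factor approximant:", sp.simplify(F2.subs(sol[0])))
print("large-x exponent  :", sol[0][n])            # equals the exact exponent 3/4
```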
24. Self-Similar Combined Approximants
It is possible to combine different types of self-similar approximants as well as these approximants and other kinds of approximations.
24.1. Different Types of Approximants
Suppose a small-variable asymptotic expansion,
is given, which is to be converted into a self-similar approximation. At the same time, one can suspect that the behavior of the sought function is quite different at small and at large variables. In such a case, one can combine different types of self-similar approximants in the following way. Let us take in series (287) several initial terms,
and construct from them a self-similar approximant
. Then, one defines the ratio
and expands the latter in powers of
x as
Constructing a self-similar approximant
, one obtains the combined approximant,
The approximants
and
can be represented by different forms of self-similar approximants. For example, it is possible to define
as a root approximant, while
as a factor or an exponential approximant, depending on the expected behavior of the sought function [93].
24.2. Self-Similar Padé Approximants
Instead of two different self-similar approximants, it is possible, after constructing a self-similar approximant
, to transform the remaining part (290) into a Padé approximant
, with
, so that
The result is the self-similarly corrected Padé approximant or, briefly, the self-similar Padé approximant [59,60,61]
The advantage of this type of approximant is that it can correctly take into account the irrational behavior of the sought function, described by the self-similar approximant, as well as the rational behavior, represented by the Padé approximant.
Note that Padé approximants (
8) are actually a particular case of the factor approximants (
274), where
M factors correspond to
and
N factors, to
. This is because the Padé approximants can be represented as
24.3. Self-Similar Borel Summation
It is possible to combine self-similar approximants with the method of Borel summation. According to this method, for a series (287), one can define [
9,
94] the Borel–Leroy transform
where
u is chosen in order to improve convergence. The series (294) can be summed by means of one of the self-similar approximations, converting
u into a control parameter
, thus getting
. Then, the self-similar Borel–Leroy summation yields the approximant
The case of the standard Borel summation corresponds to
. Then, the self-similar Borel summation gives
In addition to the combinations of different summation methods considered above, one can use other combinations. For example, the combination of exponential approximants and continued fractions has been employed [
95].
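A minimal Python (mpmath) sketch of the Borel–Leroy route is given below. For brevity, the transform is resummed by a near-diagonal Padé approximant, which, as noted above, is a particular case of the factor approximants, rather than by a more elaborate self-similar form; the test series a_n = (−1)^n Γ(n + 3/2) is an illustrative choice.
```python
# Minimal sketch of Borel-Leroy summation with a resummed Borel transform.
from mpmath import mp, pade, polyval, quad, exp, gamma, sqrt, inf

mp.dps = 30
u = 0                                              # Leroy parameter (u = 0: standard Borel)
N = 8
a = [(-1)**n * gamma(n + 1.5) for n in range(N)]   # divergent asymptotic series
b = [a[n] / gamma(n + 1 + u) for n in range(N)]    # Borel-Leroy transform coefficients

p, q = pade(b, N // 2 - 1, N // 2)                 # resummation of B(t) = sum b_n t^n

def B(t):
    """Resummed Borel-Leroy transform."""
    return polyval(p[::-1], t) / polyval(q[::-1], t)

def borel_sum(x):
    """f*(x) = integral_0^infinity exp(-t) t^u B(x t) dt."""
    return quad(lambda t: exp(-t) * t**u * B(x * t), [0, inf])

x = 0.2
# the same sum written as a Stieltjes integral, used as the exact reference
exact = quad(lambda t: exp(-t) * sqrt(t) / (1 + x * t), [0, inf])
print(borel_sum(x), exact)
```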
25. Self-Similar Data Extrapolation
One often meets the following problem. There exists an ordered dataset
labeled by the index
n, and one is interested in the possibility of predicting the values
outside this dataset. The theory of self-similar approximants suggests a solution to this problem [
27,
96].
Let us consider the last several datapoints, for instance, the last three points
How many datapoints one needs to take depends on the particular problem considered. For the explicit illustration of the idea, let us take three datapoints. The chosen points can be connected by a polynomial spline, in the present case, by a quadratic spline
defined so that
From this definition, it follows that
Treating polynomial (
299) as an expansion in powers of
t makes it straightforward to employ self-similar renormalization, thus obtaining a self-similar approximant
. For example, resorting to factor approximants, one gets:
with the parameters
The approximants
, with
provide the extrapolation of the initial dataset. The nearest to the dataset extrapolation point can be estimated as
This method can also be used for improving the convergence of the sequence of self-similar approximants. Then, the role of datapoints
is played by the self-similar approximants
. In that case, all parameters
,
,
, as well as
and
become control functions. This method of data extrapolation has been used for several problems, such as predictions for time series and convergence acceleration [
27,
96,
97].
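The scheme can be illustrated by the following minimal Python sketch. The parametrization of the spline by t = 0, 1, 2 at the three chosen points, the use of the closed-form second-order factor approximant, the choice t = 3 for the nearest extrapolation point, and the test sequence a_n = sqrt(n) are illustrative assumptions.
```python
# Minimal sketch: quadratic spline through the last three datapoints,
# renormalized into a second-order factor approximant and evaluated one step ahead.
from math import sqrt

def extrapolate_last_three(f0, f1, f2):
    """Predict the next member of a sequence from its last three members."""
    # quadratic spline p(t) = b0 + b1*t + b2*t**2 with p(0)=f0, p(1)=f1, p(2)=f2
    b0 = f0
    b1 = (-3*f0 + 4*f1 - f2) / 2
    b2 = (f0 - 2*f1 + f2) / 2
    # second-order factor approximant p*(t) = b0*(1 + A*t)**m of the spline
    c1, c2 = b1 / b0, b2 / b0
    A = (c1**2 - 2*c2) / c1
    m = c1**2 / (c1**2 - 2*c2)
    return b0 * (1 + 3*A)**m                 # the nearest extrapolation point, t = 3

# test sequence a_n = sqrt(n): predict a_7 from a_4, a_5, a_6
print(extrapolate_last_three(sqrt(4), sqrt(5), sqrt(6)), sqrt(7))
```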
26. Self-Similar Diff-Log Approximants
There is a well-known method employed in statistical physics, called the diff-log transformation [
98,
99]. This transformation for a function
is
The inverse transformation, assuming that the function
is normalized so that
reads as
When one starts with an asymptotic expansion,
the diff-log transformation gives
Expanding the latter in powers of
x yields
with the coefficients
expressed through
. This expansion can be summed by one of the self-similar methods giving
. Involving the inverse transformation (305) results in the self-similar diff-log approximants
A number of applications of the diff-log transformation can be found in Refs. [
61,
83,
99], where it is shown that the combination of the diff-log transform with self-similar approximants gives considerably more accurate results than the diff-log Padé method.
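A minimal Python (sympy) sketch of the diff-log route is given below; the test function f(x) = exp(x/(1+x)), whose diff-log transform is exactly (1+x)^(−2), and the use of a closed-form second-order factor approximant are illustrative choices.
```python
# Minimal sketch: diff-log transform, its resummation, and the inverse transformation.
import sympy as sp

x = sp.Symbol('x', positive=True)
t = sp.Symbol('t', positive=True)

f = sp.exp(x / (1 + x))                          # test function with f(0) = 1
f3 = sp.series(f, x, 0, 4).removeO()             # its third-order expansion

# diff-log transform D(x) = d/dx ln f(x), expanded to second order
D2 = sp.series(sp.diff(sp.log(f3), x), x, 0, 3).removeO()
d0, d1, d2 = [D2.coeff(x, n) for n in range(3)]

# closed-form second-order factor approximant of D(x) = d0*(1 + c1*x + c2*x**2 + ...)
c1, c2 = d1 / d0, d2 / d0
D_star = d0 * (1 + (c1**2 - 2*c2) / c1 * x)**(c1**2 / (c1**2 - 2*c2))

# inverse transformation: f*(x) = exp( integral_0^x D*(t) dt )
f_star = sp.exp(sp.integrate(D_star.subs(x, t), (t, 0, x)))

print(sp.simplify(D_star))                       # -> (x + 1)**(-2)
print(sp.simplify(sp.log(f_star) - sp.log(f)))   # -> 0: the test function is recovered
```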
27. Critical Behavior
One says that a function experiences critical behavior at a critical point when, at that point, the function tends either to zero or to infinity. It is possible to distinguish two cases: when the critical behavior occurs at infinity and when it occurs at a finite critical point. These two cases are considered separately below.
27.1. Critical Point at Infinity
If the critical behavior happens at infinity, the considered function behaves as
Then, the diff-log transform tends to the form
Here, B is a critical amplitude, while is a critical exponent.
The critical exponents are of special interest in the theory of critical phenomena. If one is able to define a self-similar approximation
directly to the studied function
, then the critical exponent can be found from the limit
Otherwise, it can be obtained from the equivalent form,
where a self-similar approximation for the diff-log transform
is needed.
The convenience of using the representation (
313) consists in the possibility of employing a larger arsenal of different self-similar approximants. Of course, the factor approximants can be involved in both cases. However, the root and nested approximants require the knowledge of the large-variable exponent of the sought function, which is not always available. In contrast, the large-variable behavior of the diff-log transform (
311) is known. Therefore, for constructing a self-similar approximation for the diff-log transform, one can resort to any type of self-similar approximant.
It is necessary to mention that the root and nested approximants can be defined, without knowing the large-variable behavior, by invoking optimization conditions of
Section 6 prescribing the value of the external exponent
, as is explained in Ref. [
33]. However, this method becomes rather cumbersome for high-order approximants.
27.2. Finite Critical Point
If the critical point is located at a finite
that is in the interval
, then
Here, the diff-log transform behaves as
Again, the critical exponent can be derived from the limit
provided a self-similar approximant
is constructed. However, it may happen that the other form
is more convenient, where a self-similar approximant for the diff-log transform
is easier to find. This is because the pole nearest to zero of
defines a critical point
, while the residue (
317) yields a critical exponent.
Note that, by a change of the variable, the problem of a finite critical point can be reduced to the case of critical behavior at infinity. For instance, one can use the change of the variable
or any other change of the variable mapping the interval
to
. Numerous examples of applying the diff-log transform, accompanied by the use of self-similar approximants, are presented in Refs. [
61,
83,
99], where it is also shown that this method considerably outperforms the diff-log Padé variant.
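The extraction of a finite critical point and of the corresponding exponent from the diff-log transform can be sketched as follows (Python, sympy). The test function f(x) = (1 − x/2)^(−3) sqrt(1+x), with critical point x_c = 2 and exponent 3, and the use of a [1/2] Padé approximant for the resummation of the diff-log transform are illustrative choices.
```python
# Minimal sketch: critical point from the pole of the resummed diff-log transform,
# critical exponent from the residue at that pole.
import sympy as sp

x = sp.Symbol('x')

f = (1 - x / 2)**(-3) * sp.sqrt(1 + x)           # test function: x_c = 2, exponent 3
k = 4
fk = sp.series(f, x, 0, k + 1).removeO()         # k-th order expansion of f
Dk = sp.series(sp.diff(sp.log(fk), x), x, 0, k).removeO()   # diff-log series

# [1/2] Pade approximant of the diff-log transform, accurate through order 3
p0, p1, q1, q2 = sp.symbols('p0 p1 q1 q2')
P, Q = p0 + p1 * x, 1 + q1 * x + q2 * x**2
sol = sp.solve([sp.expand(P - Dk * Q).coeff(x, n) for n in range(4)],
               [p0, p1, q1, q2], dict=True)[0]
P, Q = P.subs(sol), Q.subs(sol)

# critical point: positive zero of Q nearest to the origin; exponent: minus the residue
xc = min([r for r in sp.solve(Q, x) if r.is_real and r > 0])
exponent = -(P / sp.diff(Q, x)).subs(x, xc)
print(xc, exponent)                              # -> 2 3
```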
28. Non-Power-Law Behavior
In the previous sections, power-law behavior of the considered functions at large variables was kept in mind. Now, it is useful to make some comments on the use of the described approximation methods for other types of behavior. The most frequently met types of behavior at large variables are the exponential and logarithmic ones. Below, it is shown that the developed methods of self-similar approximants can be straightforwardly applied to any type of behavior.
28.1. Exponential Behavior
Exponential behavior with respect to time occurs in many mathematical models employed for describing population growth, the growth of the mass of biosystems, economic expansion, financial markets, various relaxation phenomena, etc. [
100,
101,
102,
103,
104,
105].
When the sought function displays exponential behavior at large variables, there are several ways of treating this case. First of all, this kind of behavior can be treated by the self-similar exponential approximants of
Section 22. Another way is to resort to the diff-log approximants of
Section 26 or, simply, to consider the logarithmic transform
If the sought function at large variables behaves as
then
Therefore, the function behaves as
Keeping in mind the asymptotic series (
306), one has:
which can be expanded in powers of
x giving
This is to be converted into a self-similar approximant
, after which one obtains the answer:
Moreover, the small-variable expansion of an exponential function can be directly and exactly represented through self-similar factor approximants [
91]. Indeed, let us consider the exponential function
Assume that one knows solely the small-variable asymptotic expansion
which is used for constructing factor approximants. In the lowest (second) order, one has:
In the third order, one finds:
and, similarly, in all other orders. Thus, the self-similar factor approximants of all orders reproduce the exponential function exactly:
Some other more complicated functions, containing exponentials, can also be well approximated by factor approximants [
106].
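The logarithmic-transform route described at the beginning of this subsection can be sketched as follows (Python, sympy); the test function f(x) = exp(x sqrt(1+x)), growing faster than any power of x, is an illustrative choice and is recovered exactly.
```python
# Minimal sketch: logarithmic transform, its resummation, and the back transformation.
import sympy as sp

x = sp.Symbol('x', positive=True)

f = sp.exp(x * sp.sqrt(1 + x))                   # test function with f(0) = 1
f3 = sp.series(f, x, 0, 4).removeO()             # its third-order expansion

# logarithmic transform L(x) = ln f(x) = l1*x*(1 + c1*x + c2*x**2 + ...)
L3 = sp.series(sp.log(f3), x, 0, 4).removeO()
l1, l2, l3 = [L3.coeff(x, n) for n in (1, 2, 3)]
c1, c2 = l2 / l1, l3 / l1

# closed-form second-order factor approximant of the reduced part of L(x)
L_star = l1 * x * (1 + (c1**2 - 2*c2) / c1 * x)**(c1**2 / (c1**2 - 2*c2))
f_star = sp.exp(L_star)                          # back transformation

print(sp.simplify(L_star))                       # -> x*sqrt(x + 1)
print(sp.simplify(f_star - f))                   # -> 0: the test function is recovered
```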
28.2. Logarithmic Behavior
When the sought function is suspected of exhibiting logarithmic behavior at large variables, it is reasonable to act by analogy with the previous subsection, but now defining the exponential transform
For the asymptotic series (
306), one has:
whose expansion in powers of
x produces
This can be converted into a self-similar approximation
, so that the final answer becomes:
As an example, let us consider the function
with the logarithmic behavior at large variables,
This function has the expansion
with the coefficients
Its exponential transform leads to the series (
330), with the coefficients
Defining factor approximants
, one obtains the approximants (
331), whose large-variable behavior is of the correct logarithmic form:
with the amplitudes
,
converging to the exact value
.
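The exponential-transform route can be sketched analogously (Python, sympy); the test function f(x) = 1 + (1/2) ln(1 + 2x), with logarithmic growth at large x, is an illustrative choice and is recovered exactly from its second-order expansion.
```python
# Minimal sketch: exponential transform, its resummation, and the back transformation.
import sympy as sp

x = sp.Symbol('x', positive=True)

f = 1 + sp.Rational(1, 2) * sp.log(1 + 2 * x)    # test function with f(0) = 1
f2 = sp.series(f, x, 0, 3).removeO()             # its second-order expansion

# exponential transform Z(x) = exp(f(x)) = z0*(1 + c1*x + c2*x**2 + ...)
Z2 = sp.series(sp.exp(f2), x, 0, 3).removeO()
z0, z1, z2 = [Z2.coeff(x, n) for n in range(3)]
c1, c2 = z1 / z0, z2 / z0

# closed-form second-order factor approximant of the reduced transform
Z_star = z0 * (1 + (c1**2 - 2*c2) / c1 * x)**(c1**2 / (c1**2 - 2*c2))
f_star = sp.log(Z_star)                          # back transformation

print(sp.simplify(sp.expand_log(f_star) - f))    # -> 0: the test function is recovered
```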
29. Critical Temperature Shift
Here, it is shown how the described methods can be used for calculating the relative shift of the critical temperature caused by interactions in an
N-component scalar field theory in three dimensions. The interactions can be characterized by the gas parameter
in which
is the particle density and
, the s-wave scattering length. This shift is defined as
where
is the critical temperature in the free field with
, while
is the critical temperature for nonzero
. For example, the critical temperature of the 2-component free field
is the point of Bose–Einstein condensation of an ideal gas. Here,
m is the mass of a boson, and the Boltzmann and Planck constants are set to one. For weak interactions, the temperature shift has been shown [
107,
108] to have the form
where the coefficient
needs to be calculated.
This coefficient can be found in the loop expansion [
109,
110,
111], producing an asymptotic series in powers of the variable
where
N is the number of components,
, the effective coupling, and
, the effective chemical potential. The seven-loop series reads as
whose coefficients for several
N are listed in
Table 1.
However, at the critical point, the effective chemical potential tends to zero, hence the variable
x tends to infinity. Thus, one comes to the necessity of evaluating the series (
337) for
. The direct application of the limit
to this series makes, of course, no sense. The self-similar factor approximants of
Section 23 are used here, defining the approximants
for
, keeping in mind that
is finite, so that
. Then, the approximants for the sought limit are
The convergence is accelerated by quadratic splines, as is explained in
Section 25 and in Refs. [
97,
112]. The results are displayed in
Table 2, where they are compared with Monte Carlo simulations [
113,
114,
115,
116]. The agreement between the Monte Carlo data and the values calculated by means of the self-similar approximants is very good.
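The key limiting step used above, namely that a factor approximant whose exponents sum to zero tends to a finite value at infinity, can be illustrated by the following small Python sketch; the parameter values are purely illustrative placeholders and are not those obtained from the series of Table 1.
```python
import math

def factor_limit(f0, A, n):
    """Large-x limit of f0 * prod_i (1 + A_i*x)**n_i, assuming sum(n_i) = 0."""
    assert abs(sum(n)) < 1e-12, "a finite limit requires the exponents to sum to zero"
    # (1 + A_i*x)**n_i ~ (A_i*x)**n_i at large x, and the powers of x cancel
    return f0 * math.prod(Ai**ni for Ai, ni in zip(A, n))

# purely illustrative parameters, not those of the seven-loop series
print(factor_limit(1.0, A=[2.0, 0.5], n=[0.75, -0.75]))   # -> 4**0.75 = 2.828...
```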
30. Critical Exponents
Calculation of critical exponents is one of the most important problems in the theory of phase transitions. Here, it is shown how the critical exponents can be calculated by using self-similar factor approximants applied to the asymptotic series in powers of the
, where
d is the space dimensionality. The
field theory in
is considered. The definition of the critical exponents can be found in reviews [
27,
117].
One usually derives the so-called epsilon expansions for the exponents
,
, and
. The other exponents can be obtained from the scaling relations:
In three dimensions, one has
The number of components N corresponds to different physical systems. Thus, corresponds to dilute polymer solutions, , to the Ising universality class, , to superfluids and the so-called magnetic models, , to the Heisenberg universality class, and , to some models of quantum field theory. Formally, it is admissible to study arbitrary N.
In the case of
, the critical exponents for any
d are known exactly:
For the limit
, the exact exponents are also available:
The latter for
reduce to
The epsilon expansion results in the series
obtained for
, while, in the end, one has to set
. Direct substitution of
in the series (
348) leads to values having little to do with the actual exponents. These series require the definition of their effective sums, which is accomplished by means of the self-similar factor approximants
Then, let us set and define the final answer as the half sum of the last two factor approximants and .
Let us first illustrate the procedure for the
field theory of the Ising universality class, where there exist the most accurate numerical calculations of the exponents, obtained by Monte Carlo simulations [
117,
118,
119,
120,
121]. The epsilon expansions for
,
, and
can be written [
122] as
If here
is set, one gets senseless values
,
and
. However, by means of the self-similar factor approximants, one obtains the results shown in
Table 3, which are in good agreement with Monte Carlo simulations [
117,
118,
119,
120,
121].
The use of the self-similar factor approximants can be extended to the calculation of the critical exponents for an arbitrary number of components
N of the
symmetric
field theory in
. In the general case, the epsilon expansions [
122] read as
.
Summing these series by means of the self-similar factor approximants [
123,
124], one obtains the exponents, presented in
Table 4. The obtained values of the exponents are in good agreement with experimental data, as well as with the results of numerical methods, such as Padé–Borel summation and Monte Carlo simulations. It is important to stress that, when the exact values of the exponents are known (for
N = −2 and
N → ∞), the self-similar approximants automatically reproduce these exact data.
31. Conclusions
In this review, the basic ideas of an approach allowing one to obtain sensible results from the divergent asymptotic series typical of perturbation theory have been presented. The pivotal points of the approach can be summarized as follows:
- (i)
The implantation of control functions into the calculational procedure, treating perturbation theory as a problem of optimal control theory. Control functions are defined by optimization conditions in order to control the convergence of the sequence of optimized approximants. The optimization conditions are derived from the Cauchy criterion of sequence convergence. The resulting optimized perturbation theory provides good accuracy even for very short series of just a few terms and makes it possible to extrapolate the validity of perturbation theory to arbitrary values of variables, including the limit to infinity.
- (ii)
Reformulation of perturbation theory in the language of dynamical theory, treating the passage from one approximation term to another as motion in discrete time, whose role is played by the approximation order. Then, the approximation sequence is in one-to-one correspondence with the trajectory of an effective dynamical system, and the sequence limit is equivalent to a fixed point of the trajectory. The motion near the fixed point enjoys the property of functional self-similarity. The approximation dynamical system in discrete time is called the approximation cascade. The approximation cascade can be embedded into a dynamical system in continuous time, termed the approximation flow. The representation in the language of dynamical theory allows one to improve the accuracy of optimized perturbation theory, to study the stability of the procedure, and to select the best initial approximation.
- (iii)
Introduction of control functions by means of a fractal transformation of asymptotic series, which results in the derivation of several types of self-similar approximants. These approximants combine simplicity of use with good accuracy. They can be employed for problems of interpolation as well as of extrapolation.
The application of the described methods is illustrated by several examples demonstrating the efficiency of the approach.