Next Article in Journal
Measuring α-FPUT Cores and Tails
Previous Article in Journal
Instability of Vertical Throughflows in Bidisperse Porous Media
 
 
Font Type:
Arial Georgia Verdana
Font Size:
Aa Aa Aa
Line Spacing:
Column Width:
Background:
Review

From Asymptotic Series to Self-Similar Approximants

by
Vyacheslav I. Yukalov
1,2,* and
Elizaveta P. Yukalova
3
1
Bogolubov Laboratory of Theoretical Physics, Joint Institute for Nuclear Research, 141980 Dubna, Russia
2
Instituto de Fisica de São Carlos, Universidade de São Paulo, CP 369, São Carlos 13560-970, SP, Brazil
3
Laboratory of Information Technologies, Joint Institute for Nuclear Research, 141980 Dubna, Russia
*
Author to whom correspondence should be addressed.
Physics 2021, 3(4), 829-878; https://doi.org/10.3390/physics3040053
Submission received: 20 July 2021 / Revised: 5 September 2021 / Accepted: 14 September 2021 / Published: 27 September 2021
(This article belongs to the Section Statistical Physics and Nonlinear Phenomena)

Abstract

:
The review presents the development of an approach of constructing approximate solutions to complicated physics problems, starting from asymptotic series, through optimized perturbation theory, to self-similar approximation theory. The close interrelation of underlying ideas of these theories is emphasized. Applications of the developed approach are illustrated by typical examples demonstrating that it combines simplicity with good accuracy.

1. Introduction

The standard way of treating realistic physical problems, described by complicated equations, relies on approximate solutions of the latter, since the occurrence of exact solutions is rather an exception. The most often used method is a kind of perturbation theory based on expansions in powers of some small parameters. This way encounters two typical obstacles: the absence of small parameters and divergence of resulting perturbative series. To overcome these difficulties, different methods of constructing approximate solutions have been suggested.
In this review, it is demonstrated how, starting from asymptotic series, there appear general ideas of improving the series convergence and how these ideas lead to the development of powerful methods of optimized perturbation theory and self-similar approximation theory.

2. Asymptotic Expansions

Let us be interested in finding a real function f ( x ) of a real variable x. A generalization to complex-valued functions and variables can be straightforwardly done by considering several real functions and variables. The case of a real function and variable is less cumbersome and allows for the easier explanation of the main ideas. Suppose that the function f ( x ) is a solution of very complicated equations that cannot be solved exactly and allow only for finding an approximate solution for the asymptotically small variable x 0 in the form
f ( x ) f k ( x ) ( x 0 ) .
The following cases can happen:
(i)
Expansion over a small variable:
f k ( x ) = f 0 ( x ) 1 + n = 1 k a n x n ,
where the prefactor f 0 ( x ) is a given function. The expansion is asymptotic in the sense of Poincaré [1,2], since
a n + 1 x n + 1 a n x n 0 ( x 0 ) ,
with a n assumed to be nonzero.
(ii)
Expansion over a small function:
f k ( x ) = f 0 ( x ) 1 + n = 1 k a n φ n ( x ) ,
when the function φ ( x ) tends to zero as x 0 so that
a n + 1 φ n + 1 ( x ) a n φ n ( x ) 0 ( x 0 ) .
(iii)
Expansion over an asymptotic sequence:
f k ( x ) = f 0 ( x ) 1 + n = 1 k a n φ n ( x ) ,
such that
a n + 1 φ n + 1 ( x ) a n φ n ( x ) 0 ( x 0 ) .
(iv)
Generalized asymptotic expansion:
f k ( x ) = f 0 ( x ) 1 + n = 1 k a n ( x ) φ n ( x ) ,
where the coefficients a n ( x ) depend on the variable x and { φ n ( x ) } is an asymptotic sequence, such that
a n + 1 ( x ) φ n + 1 ( x ) a n ( x ) φ n ( x ) 0 ( x 0 ) .
This type of expansion occurs in the Lindstedt–Poincaré technique [1,3,4] and in the Krylov–Bogolubov averaging method [5,6,7,8].
(v)
Expansion over a dummy parameter:
f k ( x ) = f 0 ( x ) 1 + n = 1 k a n ( x ) ε n .
Here, the value of interest corresponds to the limit ε = 1 , while the series is treated as asymptotic with respect to ε 0 , hence
a n + 1 ( x ) ε n + 1 a n ( x ) ε n 0 ( ε 0 ) .
The introduction of dummy parameters is often used in perturbation theory, for instance in the Euler summation method, Nörlund method, and in the Abel method [9].
Dummy parameters appear when one considers a physical system characterized by a Hamiltonian (or Lagrangian) H, while starting the consideration with an approximate Hamiltonian H 0 , so that one has
H ε = H 0 + ( H H 0 ) ε ( ε 1 ) .
Then, perturbation theory with respect to H H 0 yields a series in powers of ε . Different iteration procedures also can be treated as expansions in powers of dummy parameters. Sometimes, perturbation theory with respect to a dummy parameter is termed nonperturbative, keeping in mind that it is not a perturbation theory with respect to some other physical parameter, say a coupling parameter. Of course, this misuse of terminology is confusing, mathematically incorrect, and linguistically awkward. Therefore, it is mathematically correct to call perturbation theory with respect to any parameter perturbation theory.

3. Sequence Transformations

Asymptotic series are usually divergent. To assign to a divergent series an effective limit, one involves different resummation methods employing sequence transformations [10]. The most often used are the Padé approximation and Borel summation.

3.1. Padé Approximants

The method of Padé approximants sums the series
f k ( x ) = n = 0 k a n x n
by means of rational fractions
P M / N ( x ) = a 0 + n = 1 M b n x n 1 + n = 1 N c n x n ( M + N = k ) ,
with the coefficients b n and c n expressed through a n from the requirement of coincidence of the asymptotic expansions
P M / N ( x ) f k ( x ) ( x 0 ) .
As is evident from their structure, the Padé approximants provide the best approximation for rational functions. However, in general, they have several deficiencies. First of all, they are not uniquely defined, in the sense that, for a series of order k, there are C k 2 + 2 different Padé approximants P M / N , with M + N = k , where
C k n = k ! ( k n ) ! n ! = k ( k 1 ) ( k 2 ) ( k n + 1 ) n ! ,
and there is no uniquely defined general prescription of which of them to choose. Often, one takes the diagonal approximants P N / N , with 2 N = k . However, these are not necessarily the most accurate [11]. Second, there is the annoying problem of the appearance of spurious poles.
Third, when the sought function, at small x behaves as in expansion (7), but at large x, it may have the power-law behavior x β that should be predicted from the extrapolation of the small-variable expansion, then this extrapolation to a large variable x 1 cannot in principle be done if β is not known or irrational. Let us stress that here one keeps in mind the extrapolation problem from the knowledge of only the small-variable expansion and the absence of knowledge on the behavior of the sought function at large x. This case should not be confused with the interpolation problem employing the method of two-point Padé approximants, when both expansions at small as well as at large variables are available [12].
Finally, the convergence of the Padé approximants is not a simple problem [11,13], especially when one looks for a summation of a series representing a function that is not known. In the latter case, one tries to observe what is called apparent numerical convergence which may be absent.
As an example of a problem that is not Padé summable [14,15], it is possible to mention the series arising in perturbation theory for the eigenvalues of the Hamiltonian
H = 1 2 d 2 d x 2 + 1 2 x 2 + g x m ,
where x ( , ) , g > 0 , and m 8 .

3.2. Borel Summation

The series (7) can be Borel summed by representing it as the Laplace integral
f k B ( x ) = 0 e t B k ( t x ) d t
of the Borel transform
B k ( t ) n = 0 k a n n ! t n .
This procedure is regular, since if series (7) converges, then
f k ( x ) = n = 0 k a n x n = n = 0 k a n n ! x n 0 e t t n d t =
= 0 e t n = 0 k a n n ! ( t x ) n d t = 0 e t B k ( t x ) d t = f k B ( x ) .
Conditions of Borel summability are given by the Watson theorem [9], according to which a series (7) is Borel summable if it represents a function analytic in a region and in that region the coefficients satisfy the inequality | a n | C n n ! for all orders n.
The problem in this method arises because the sought function is usually unknown, hence its analytic properties are also not known, and the behavior of the coefficients a n for large orders n is rarely available. When the initial series is convergent, its Borel transform is also convergent and the integration and the summation in the above formula can be interchanged. However, when the initial series is divergent, the interchange of the integration and summation is not allowed. One has, first, to realize a resummation of the Borel transform and after this to perform the integration.
There are series that cannot be Borel summed. As an example, let us mention a model of a disordered quenched system [16] with the Hamiltonian
H ( g , φ , ξ ) = ( 1 + ξ ) φ 2 + g φ 4 ,
in which φ ( , ) , ξ ( , ) , and g > 0 , so that the free energy, as a function of the coupling parameter, is
f ( g ) = ln Z ( g , ξ ) exp ξ 2 σ d ξ π σ ,
where the statistical sum reads as
Z ( g , ξ ) = exp { H ( g , φ , ξ ) } d φ π .
By analytic means and by direct computation of 200 terms in the perturbation expansion for the free energy, it is shown [16] that the series is not Borel summable, since the resulting terms do not converge to any limit.
Sometimes, the apparent numerical convergence can be achieved by using the Padé approximation for the Borel transform under the Laplace integral, which is called the Padé–Borel summation.

4. Optimized Perturbation Theory

The mentioned methods of constructing approximate solutions tell us that there are three main ways that could improve the convergence of the resulting series. These are: (i) the choice of an appropriate initial approximation; (ii) change of variables, and (iii) series transformation. However, the pivotal question arises: How can these choices be optimized?
The idea of optimizing system performance comes from optimal control theory for dynamical systems [17]. Similarly to dynamical systems, the optimization in perturbation theory implies the introduction of control functions in order to achieve series convergence, as was advanced in Refs. [18,19,20] and employed for describing anharmonic crystals [19,20,21,22,23,24] and the theory of melting [25]. Perturbation theory, complimented by control functions governing the series convergence, is called optimized perturbation theory.
The introduction of control functions means the reorganization of a divergent series into a convergent one. Formally, this can be represented as the operation
R ^ [ u k ] { f k ( x ) } = { F k ( x , u k ) }
converting an initial series into a new one containing control functions u k ( x ) . Then, the optimized approximants are
f ¯ k ( x ) = F k ( x , u k ( x ) ) .
The optimization conditions define the control functions in such a way as to make the new series { F k ( x , u k ( x ) ) } convergent; because of this, the method is called optimized perturbation theory. The general approach to formulating optimization conditions is expounded in the review articles [26,27], and some particular methods are discussed in Refs. [28,29,30]. Control functions can be implanted into perturbation theory in different ways. The main methods are described below.

4.1. Initial Approximation

Each perturbation theory or iterative procedure starts with an initial approximation. It is possible to accept as an initial approximation not a fixed form but an expression allowing for variations. For concreteness, let us assume a problem, characterized by a Hamiltonian H containing a coupling parameter g, is considered. Looking for the eigenvalues of the Hamiltonian, using perturbation theory with respect to the coupling, one comes to a divergent series
E k ( g ) = n = 1 k c n g n ( g 0 ) .
As an initial approximating Hamiltonian, one can consider a form H 0 ( u ) containing trial parameters. For brevity, one parameter u is written here. Then, the following Hamiltonian is defined:
H ε = H 0 ( u ) + ε [ H H 0 ( u ) ] ( ε 1 ) .
To find the eigenvalues of the Hamiltonian, one can resort to perturbation theory in powers of the dummy parameter ε , yielding
E k ( g , u ) = n = 1 k c n ( g , u ) ε n .
Setting ε = 1 and defining control functions u k ( x ) from optimization conditions results in the optimized approximants
E ¯ k ( g ) = E k ( g , u k ( g ) ) .
The explicit way of defining control functions and particular examples are described in the following sections.

4.2. Change of Variables

Control functions can be implanted through the change of variables. Suppose a series
f k ( x ) = n = 0 k a n x n
is considered.
Accomplishing the change of the variable
x = x ( z , u ) , z = z ( x , u ) ,
One comes to the functions f k ( x ( z , u ) ) . Expanding the latter in powers of the new variable z, up to the order k, gives
f k ( x ( z , u ) ) = n = 0 k b n ( u ) z n ( z 0 ) .
In terms of the initial variable, this implies
F k ( x , u ) = n = 0 k b n ( u ) z n ( x , u ) .
Defining control functions u k ( x ) yields the optimized approximants (13).
When the variable x varies between zero and infinity, sometimes it is convenient to resort to the change of variables mapping the interval [ 0 , ) to the interval ( , 1 ] , passing to a variable y,
x = u ( 1 y ) ω = x ( y , u , ω ) ,
where u > 0 and ω > 0 are control parameters [31,32]. The inverse change of variables is
y = 1 u x 1 / ω = y ( x , u , ω ) .
The series (18) becomes:
f k ( x ( y , u , ω ) ) = n = 0 k a n u ( 1 y ) ω n .
Expanding this in powers of y, one obtains:
F k ( x , u , ω ) = n = 0 k b n ( u , ω ) [ y ( x , u , ω ) ] n .
Defining control functions u k = u k ( x ) and ω k = ω k ( x ) gives the optimized approximants
f ¯ k ( x ) = F k ( x , u k ( x ) , ω k ( x ) ) .
Other changes of variables can be found in review [27].

4.3. Sequence Transformations

Control functions can also be implanted by transforming the terms of the given series by means of some transformation,
T ^ [ u ] f k ( x ) = F k ( x , u k ) .
Defining control functions u k ( x ) gives F ¯ k ( x , u k ( x ) ) . Accomplishing the inverse transformation results in the optimized approximant
f ¯ k ( x ) = T ^ 1 [ u ] F ¯ k ( x , u k ( x ) ) .
As an example, let us consider the fractal transform [26] that is needed in what follows:
T ^ [ s ] f k ( x ) = x s f k ( x ) = F k ( x , s ) .
For this transform, the scaling relation is valid:
F k ( λ x , s ) F k ( x , s ) = f k ( λ x ) f k ( x ) λ s .
The scaling power s plays the role of a control parameter through which control functions s k ( x ) can be introduced [33,34,35].

5. Statistical Physics

In the problems of statistical physics, before calculating observable quantities, one has to find probabilistic characteristics of the system. This can be either probability distributions, or correlation functions, or Green functions. Thus, first, one needs to develop a procedure for finding approximations for these characteristics, and then to calculate the related approximations for observable quantities. Here, this procedure is exemplified for the case of a system described by means of Green functions [18,19,20,21,22,23].
Let us consider Green functions for a quantum statistical system with particle interactions measured by a coupling parameter g. The single-particle Green function (propagator) satisfies the Dyson equation that can be schematically represented as
G ( g ) = G 0 + G 0 Σ ( G ( g ) ) G ( g ) ,
where G 0 is an approximate propagator and Σ ( G ) is self-energy [36,37].
Usually, one takes for the initial approximation G 0 the propagator of noninteracting (free) particles, whose self-energy is zero. Then, iterating the Dyson equation, one gets the relation
G k + 1 ( g ) = G 0 + G 0 Σ k ( G k ( g ) ) G k ( g ) ,
which is a series in powers of the coupling parameter g. Respectively, the sequence of the approximate propagators { G k ( g ) } can be used for calculating observable quantities
A k ( g ) = n = 0 k c n g n ,
that are given by a series in powers of g. This is an asymptotic series with respect to the coupling parameter g 0 , which as a rule is divergent for any finite g.
Instead, it is possible to take for the initial approximation an approximate propagator G 0 ( u ) containing a control parameter u. This parameter can, for instance, enter through an external potential [38] corresponding to the self-energy Σ 0 . Then, the Dyson equation reads as
G ( g ) = G 0 ( u ) + G 0 ( u ) [ Σ ( G ( g ) ) Σ 0 ( G 0 ( u ) ) ] G ( g ) .
Iterating this equation [39] yields the approximations for the propagator
G k + 1 ( g , u ) = G 0 ( u ) + G 0 ( u ) [ Σ k ( G k ( g , u ) ) Σ 0 ( G 0 ( u ) ) ] G k ( g , u ) .
This iterative procedure is equivalent to the expansion in powers of a dummy parameter.
Being dependent on the control parameter u, the propagators G k ( g , u ) generate the observable quantities A k ( g , u ) also depending on this parameter. Defining control functions u k ( g ) results in the optimized approximants
A ¯ k ( g ) = A k ( g , u k ( g ) ) ,
for observable quantities.

6. Optimization Conditions

The above sections explain how to incorporate control parameters into the sequence of approximants that, after defining control functions, become optimized approximants. Now, it is necessary to provide a recipe for defining control functions.
By their meaning, control functions have to govern the convergence of the sequence of approximants. The Cauchy criterion tells us that a sequence { F k ( x , u k ) } converges if and only if, for any ε > 0 , there exists a number k ε such that
| F k + p ( x , u k + p ) F k ( x , u k ) | < ε ,
for all k > k ε and p > 0.
In optimal control theory [17], control functions are defined as the minimizers of a cost functional. Considering the convergence of a sequence, it is natural to introduce the convergence cost functional [26]
C [ u ] = 1 2 k C 2 ( F k + p , F k ) ,
in which the Cauchy difference is defined,
C ( F k + p , F k ) F k + p ( x , u k + p ) F k ( x , u k ) .
To minimize the convergence cost functional implies the minimization of the Cauchy difference with respect to control functions,
min u | C ( F k + p , F k ) | = min u | F k + p ( x , u k + p ) F k ( x , u k ) | ,
for all k 0 and p 0 .
In order to derive from this condition explicit equations for control functions, one needs to accomplish some rearrangements. If the Cauchy difference is small, this means that it is possible to assume that u k + p is close to u k and F k + p is close to F k . Then, one can expand the first term of the Cauchy difference in the Taylor series with respect to u k + p in the vicinity of u k , which gives
F k + p ( x , u k + p ) = n = 0 1 n ! n F k + p ( x , u k ) u k n ( u k + p u k ) n .
Let us treat F k + p as a function of the discrete variable p, which allows us to expand this function in the discrete Taylor series
F k + p ( x , u k ) = m = 0 1 m ! Δ p m F k ( x , u k ) ,
where a finite difference of m-th order is
Δ p m F k = j = 0 m ( 1 ) m j m ! j ! ( m j ) ! F k + j p .
As examples of finite differences, let us mention
Δ p 0 F k = F k , Δ p 1 F k Δ p F k = F k + p F k , Δ p 2 F k = F k + 2 p 2 F k + p + F k .
Thus, the first term in the Cauchy difference can be represented as
F k + p ( x , u k + p ) = m , n = 0 ( u k + p u k ) n m ! n ! n u k n Δ p m F k ( x , u k ) .
Keeping on the right-hand side of representation (44) a finite number of terms results in the explicit optimization conditions. The zero order is not sufficient for obtaining optimization conditions, since in this order,
F k + p ( x , u k + p ) F k ( x , u k ) ,
hence, the Cauchy difference is automatically zero:
C ( F k + p , F k ) 0 .
In the first order:
F k + p ( x , u k + p ) F k + p ( x , u k ) + ( u k + p u k ) u k F k ( x , u k ) ,
which gives the Cauchy difference
C ( F k + p , F k ) = F k + p ( x , u k ) F k ( x , u k ) + ( u k + p u k ) u k F k ( x , u k ) .
The minimization of the latter with respect to control functions implies
min u | C ( F k + p , F k ) | min u | F k + p ( x , u k ) F k ( x , u k ) | +
+ min u ( u k + p u k ) u k F k ( x , u k ) .
Minimizing the first part on the right-hand side of expression (47), one gets the minimal-difference condition
min u | F k + p ( x , u k ) F k ( x , u k ) |
for the control functions u k = u k ( x ) . The ultimate form of this condition is the equality
F k + p ( x , u k ) F k ( x , u k ) = 0 .
The minimization of the second part of the right-hand side of expression (47) leads to the minimal-derivative condition
min u ( u k + p u k ) u k F k ( x , u k ) .
The minimum of condition (50) is made zero by setting
u k F k ( x , u k ) = 0 .
When this equation has no solution for the control function u k , it is straightforward to either set
u k + p = u k u k F k ( x , u k ) 0 ,
or to look for the minimum of the derivative
min u u k F k ( x , u k ) ( u k + p u k ) .
In this way, control functions are defined by one of the above optimization conditions. It is admissible to consider higher orders of expression (44) obtaining higher orders of optimization conditions [27].
Control functions can also be defined if some additional information on the sought function f ( x ) is available. For instance, when the asymptotic behavior of f ( x ) , as x x 0 , is known, where
f ( x ) f a s ( x ) ( x x 0 ) ,
then the control functions u k ( x ) can be defined from the asymptotic condition
F k ( x , u k ) = T ^ [ u ] f k ( x ) T ^ [ u ] f a s ( x ) ( x x 0 ) .

7. Thermodynamic Potential

As an illustration of using the optimized perturbation theory, let us consider the thermodynamic potential
f ( g ) = ln Z ( g ) ,
of the so-called zero-dimensional anharmonic oscillator model with the statistical sum
Z ( g ) = 1 π exp ( H [ φ ] ) d φ ,
and the Hamiltonian
H [ φ ] = φ 2 + g φ 4 ( g > 0 ) .
Taking for the initial approximation the quadratic Hamiltonian
H 0 [ φ ] = ω 2 φ 2 ,
in which ω is a control parameter, one defines
H ε [ φ ] = H 0 [ φ ] + ε Δ H ( ε 1 ) ,
where the perturbation term is
Δ H = H H 0 = ( 1 ω 2 ) φ 2 + g φ 4 .
Employing perturbation theory with respect to the dummy parameter ε and setting ε = 1, leads to the sequence of the approximants
F k ( g , ω ) = ln Z k ( g , ω ) .
Control functions for the approximations of odd orders are found from the minimal derivative condition
F k ( g , ω k ) ω k = 0 ( k = 1 , 3 , ) .
For even orders, the above equation does not possess real-valued solutions, because of which
ω k = ω k 1 ( g ) ( k = 2 , 4 , )
is set.
Thus, one obtains the optimized approximants,
f ¯ k ( g ) = F k ( g , ω k ( g ) ) .
Their accuracy can be characterized by the maximal percentage error
ε k = sup g f ¯ k ( g ) f ( g ) f ( g ) × 100 % ,
comparing the optimized approximants with the exact expression (56). These maximal errors are
ε 1 = 7 % , ε 2 = 4 % , ε 3 = 0.2 % , ε 4 = 0.2 % .
As one can see, with just a few terms, quite good accuracy is reached, while the bare perturbation theory in powers of the coupling parameter g is divergent. Details can be found in review [27].
This simple model allows for explicitly studying the convergence of the sequence of the optimized approximants. It has been proved [40,41] that this sequence converges for both ways of defining control functions, either from the minimal derivative or minimal-difference conditions.

8. Eigenvalue Problem

Another typical example is the calculation of the eigenvalues of Schrödinger operators, defined by the eigenproblem
H ψ n = E n ψ n .
Let us consider a one-dimensional anharmonic oscillator with the Hamiltonian
H = 1 2 d 2 d x 2 + 1 2 x 2 + g x 4 ,
in which x ( , ) and g > 0 .
For the initial approximation, let us take the harmonic oscillator model:
H 0 = 1 2 d 2 d x 2 + ω 2 2 x 2 ,
with a control parameter ω . Following the approach,
H ε = H 0 + ε Δ H ( ε 1 ) ,
is defined, where
Δ H = H H 0 = 1 ω 2 2 x 2 + g x 4 .
Employing the Rayleigh–Schrödinger perturbation theory with respect to the dummy parameter ε, one obtains the spectrum F k ( g , ω ) where k enumerates the approximation order and n = 0,1,2,… is the quantum number labeling the states. The zero-order eigenvalue is
E 0 n ( g , ω ) = n + 1 2 ω .
For odd orders, control functions can be found from the optimization condition
ω k E k n ( g , ω k ) = 0 ( k = 1 , 3 , ) .
For even orders, the above equation does not possess real-valued solutions, because of which let us set
ω k = ω k 1 ( g ) ( k = 2 , 4 , ) .
Using optimized perturbation theory results in the eigenvalues
E ¯ k n ( g ) = E k n ( g , ω k ( g ) ) .
Comparing these with the numerically found eigenvalues E n ( g ) [42], the percentage errors
ε k n ( g ) = E ¯ k n ( g ) E n ( g ) E n ( g ) × 100 % ,
are deined.
Then, one can find the maximal error of the k-th order approximation
ε k = sup n , g ε k n ( g ) ,
which gives
ε 1 = 2 % , ε 2 = 0.8 % , ε 3 = 0.8 % , ε 4 = 0.5 % .
The maximal errors
ε k 0 sup g ε k 0 ( g ) ,
for the ground state are
ε 1 0 = 2 % , ε 2 0 = 0.8 % , ε 3 0 = 0.04 % , ε 4 0 = 0.03 % .
Again, good accuracy and numerical convergence are observed. Recall that the bare perturbation theory in powers of the anharmonicity parameter g diverges for any finite g. The convergence of the sequence of the optimized approximants can be proved analytically [43,44]. More details can be found in Ref. [27].

9. Nonlinear Schrödinger Equation

The method can be applied to strongly nonlinear systems. Let us illustrate this by considering the eigenvalue problem
H [ ψ ] ψ ( r ) = E ψ ( r ) ,
with the nonlinear Hamiltonian
H [ ψ ] = 2 2 m + U ( r ) + N Φ 0 | ψ | 2 .
Here, N is the number of trapped atoms, and the potential
U ( r ) = m 2 ω 2 x 2 + y 2 + α 2 z 2 ,
is an external potential trapping atoms whose interactions are measured by the parameter
Φ 0 = 4 π a s m ,
where a s is a scattering length. This problem is typical for trapped atoms in Bose–Einstein condensed state [45,46,47,48,49,50,51,52].
The trap anisotropy is characterized by the trap aspect ratio
α ω z ω = l l z 2 l 1 m ω , l z 1 m ω z .
It is convenient to introduce the dimensionless coupling parameter
g 4 π a s l N .
Measuring energy in units of ω and lengths in units of l , one can pass to dimensionless units and write the nonlinear Hamiltonian as
H [ ψ ] = 2 2 + 1 2 r 2 + α 2 z 2 + g | ψ | 2 ,
with a dimensionless wave function ψ .
Applying optimized perturbation theory for the nonlinear Hamiltonian [53,54], for the initial approximation the oscillator Hamiltonian,
H 0 [ ψ ] = 2 2 + 1 2 u 2 r 2 + v 2 z 2 ,
is taken, in which u and v are control parameters. The zero-order spectrum is given by the expression:
E n m j ( 0 ) = ( 2 n + | m | + 1 ) u + 1 2 + j v ,
with the radial quantum number n = 0 , 1 , 2 , , azimuthal quantum number m = 0 , ± 1 , ± 2 , , and the axial quantum number j = 0 , 1 , 2 , . The related wave functions are the Laguerre–Hermite modes. The system Hamiltonian takes the form
H ε = H 0 [ ψ ] + ε Δ H ( ε 1 ) ,
where the perturbation term is
Δ H H [ ψ ] H 0 [ ψ ] = 1 2 1 u 2 r 2 + 1 2 α 2 v 2 z 2 + g | ψ | 2 .
Perturbation theory with respect to the dummy parameter ε gives the energy levels E n m j ( k ) . The control functions are defined by the optimization conditions
u k E n m j ( k ) ( g , u k , v k ) = 0 , v k E n m j ( k ) ( g , u k , v k ) = 0 ,
yielding u k = u k ( g ) and v k = v k ( g ) . Applications to trapped atoms are discussed in Refs. [53,54].

10. Hamiltonian Envelopes

When choosing for the initial approximation a Hamiltonian, one confronts the problem of combining two conditions often contradicting each other. From one side, the initial approximation has to possess the properties imitating the studied problem. From the other side, it has to be exactly solvable, providing tools for the explicit calculation of the terms of perturbation theory. If the studied Hamiltonian and the Hamiltonian of the initial approximation are too different, perturbation theory, even being optimized, may be poorly convergent. In such a case, it is possible to invoke the method of Hamiltonian envelopes [27,55].

10.1. General Idea

Suppose one takes as an initial approximation a Hamiltonian H 0 that, however, is very different from the considered Hamiltonian H. The difficulty is that the set of exactly solvable problems is very limited, so that sometimes it is impossible to find another Hamiltonian that would be close to the studied form H and at the same time solvable. In that case, one can proceed as follows. Notice that, if a Hamiltonian H 0 defines the eigenproblem
H 0 ψ n = E n ψ n ,
then a function h ( H 0 ) satisfies the eigenproblem
h ( H 0 ) ψ n = h ( E n ) ψ n ,
enjoying the same eigenfunctions. The function h ( H ) can be called the Hamiltonian envelope [27,55]. Note that, because of the property (90), h ( H 0 ) can be any real function.
Accepting h ( H 0 ) as an initial Hamiltonian, one obtains the system Hamiltonian,
H ε = h ( H 0 ) + ε Δ H ( ε 1 ) ,
with the perturbation term
Δ H = H h ( H 0 ) .
If one finds a function h ( H 0 ) that better imitates the studied system than the bare H 0 , then the convergence of the sequence of approximations can be improved.
The general idea in looking for the function h ( H 0 ) is as follows. Let the system Hamiltonian be
H = 2 2 m + V ( r ) .
In addition, let the eigenproblem for a Hamiltonian
H 0 = 2 2 m + V 0 ( r ) ,
enjoy exact solutions, although poorly approximating the given system.
Looking for the function h ( H 0 ) , one keeps in mind that the most influence on the behavior of wave functions is produced by the region, where the system potential V ( r ) displays singular behavior tending to ± . Suppose this happens at the point r s . Then, the function h ( H 0 ) has to be chosen such that
0 < lim r r s h ( V 0 ( r ) ) V ( r ) < ,
that is the function h ( H 0 ) needs to possess the same type of singularity as the potential of the studied system. Below, it is illustrated how this choice is made for concrete examples.

10.2. Power-Law Potentials

Let us consider the Hamiltonian with a power-law potential
H = 1 2 m d 2 d x 2 + m ω 0 2 2 x 2 + A x ν ( ν > 0 ) ,
in which x ( , ) , ω 0 > 0 , A > 0 , and ν > 0 . To pass to dimensionless units, the energy and length quantities are scaled as
H ¯ = H ω 0 , x ¯ = m ω 0 x .
The dimensionless coupling parameter is
g A ω 0 ( m ω 0 ) ν / 2 .
In what follows, in order not to complicate notation, the bars are omitted above dimensionless quantities. In dimensionless units, one gets the Hamiltonian
H = 1 2 d 2 d x 2 + x 2 2 + g x ν .
In order to return to the dimensional form, it is sufficient to make the substitution
H H ω 0 , x m ω 0 x .
Taking for H 0 the Hamiltonian
H 0 = 1 2 d 2 d x 2 + u 2 2 x 2 ,
Let us compare the potentials
V ( x ) = x 2 2 + g x ν , V 0 ( x ) = u 2 2 x 2 .
As it is evident, the singular point here is x s = . To satisfy condition (97) for ν < 2 , one needs to take
h ( V 0 ) = V 0 ( 0 < ν < 2 ) ,
since
lim x h ( V 0 ( x ) ) V ( x ) = u 2 ( ν < 2 ) ,
while, for ν > 2 ,
h ( V 0 ) = V 0 ν / 2 ( ν > 2 ) ,
needs to be accepted, since now
lim x h ( V 0 ( x ) ) V ( x ) = 1 g u 2 2 ν / 2 ( ν > 2 ) .
In that way, the Hamiltonian envelope is given by the function
h ( H 0 ) = H 0 , 0 < ν 2 H 0 ν / 2 , ν 2 .

10.3. Inverse Power-Law Potentials

The radial Hamiltonian with an inverse power-law potential has the form
H = 1 2 m d 2 d r 2 + l ( l + 1 ) 2 m r 2 A r ν ,
in which r 0 , l = 0 , 1 , 2 , , A > 0 , and ν > 0 . Again, one can introduce the dimensionless quantities,
H ¯ H ω , r ¯ m ω r ,
and the dimensionless coupling parameter
g A ω ( m ω ) ν / 2 ,
where ω is arbitrary. Since ω is arbitrary, it can be chosen such that the coupling parameter be unity,
g = 1 , ω 2 ν = m ν A 2 .
In dimensionless units, the Hamiltonian becomes:
H = 1 2 d 2 d r 2 + l ( l + 1 ) 2 r 2 1 r ν .
This reminds us of the Coulomb problem with the Hamiltonian
H 0 = 1 2 d 2 d r 2 + l ( l + 1 ) 2 r 2 u r .
Here, u is a control parameter. Comparing the potentials
V ( r ) = 1 r ν , V 0 ( r ) = u r .
One can see that, to satisfy condition (97), one has to take the envelope function as
h ( V 0 ) = | V 0 | ν ,
as far as
h ( V 0 ( r ) ) V ( r ) = u ν .
Then, the Hamiltonian envelope reads as
h ( H 0 ) = | H 0 | ν ( ν > 0 ) .

10.4. Logarithmic Potential

As one more example, let us take the radial Hamiltonian of arbitrary dimensionality with the logarithmic potential
H = 1 2 m d 2 d r 2 + l d ( l d + 1 ) 2 m r 2 + B ln r b ,
where r > 0 , B > 0 , b > 0 , and the effective radial quantum number is
l d l + d 3 2 .
Again, one needs to work with dimensionless quantities, defining
H ¯ = m b 2 H , r ¯ = r b ,
and the dimensionless coupling parameter
g m b 2 B .
Then, for the simplicity of notation, let us omit the bars over the letters and get the dimensionless Hamiltonian,
H = 1 2 d 2 d r 2 + l d ( l d + 1 ) 2 r 2 + g ln r .
Accepting at the starting step the oscillator Hamiltonian
H 0 = 1 2 d 2 d r 2 + l d ( l d + 1 ) 2 r 2 + u 2 2 r 2 .
One has to compare the potentials
V ( r ) = g ln r , V 0 ( r ) = u 2 2 r 2 .
Now, the singular points are r s = 0 and r s = . This dictates the choice of the envelope function
h ( V 0 ) = ln V 0 ,
since
lim r 0 h ( V 0 ( r ) ) V ( r ) = lim r h ( V 0 ( r ) ) V ( r ) = 2 g .
Some explicit calculations can be found in Refs. [27,55].
Optimized perturbation theory, whose main points are expounded above, has been applied to a great variety of problems in statistical physics, condensed matter physics, chemical physics, quantum field theory, etc., as is reviewed in Ref. [27].

11. Optimized Expansions: Summary

As is explained above, the main idea of optimized perturbation theory is the introduction of control parameters that generate order-dependent control functions controlling the convergence of the sequence of optimized approximants. Control functions can be incorporated in the perturbation theory in three main ways: by choosing an initial approximation containing control parameters, by making a change of variables and resorting to a reexpansion trick, or by accomplishing a transformation of the given perturbation sequence. Control functions are defined by optimization conditions. Of course, there are different variants of implanting control functions and choosing the appropriate variables. In some cases, control functions u k ( x ) can become control parameters u k , since constants are just a particular example of functions.
Below, the main ideas are summarized shedding light on the common points for choosing control functions, the variables for expansions, on the convergence of the sequence of optimized approximants, and on the examples when control functions can be reduced to control parameters. In addition, several methods of optimization are compared. To make the discussion transparent, the ideas on the example of a partition function for a zero-dimensional φ 4 field theory and on the model of one-dimensional anharmonic oscillator are illustrated.

11.1. Expansion over Dummy Parameters

The standard and often used scheme of optimized perturbation theory is based on the incorporation of control functions through initial approximations, as is mentioned in Section 4.1. Suppose one deals with a Hamiltonian H ( g ) containing a physical parameter g, say coupling parameter. When the problem cannot be solved exactly, one takes a trial Hamiltonian H 0 ( u ) containing control parameters denoted through u. One introduces the Hamiltonian
H ε ( g , u ) = H 0 ( u ) + ε [ H ( g ) H 0 ( u ) ] ,
in which ε is a dummy parameter. One calculates the quantity of interest F k ( g , u , ε ) by means of perturbation theory in powers of the dummy parameter ε ,
F k ( g , u , ε ) = n = 0 k c n ( g , u ) ε n ,
after which sends this parameter to one, ε 1 .
Employing one of the optimization conditions discussed in Section 6, one finds the control functions u k ( g ) . The most often used optimization conditions are the minimal-difference condition
F k ( g , u , 1 ) F k 1 ( g , u , 1 ) = 0 , u = u k ( g ) ,
and the minimal-derivative condition
u F k ( g , u , 1 ) = 0 , u = u k ( g ) .
Substituting the found control functions u k ( g ) into F k ( g , u k ( g ) , 1 ) results in the optimized approximants
F ¯ k ( g ) = F k ( g , u k ( g ) , 1 ) .
This scheme of optimized perturbation theory was suggested and employed in Refs. [18,19,20,21,22,23] and in numerous following publications, as can be inferred from the review works [26,27,28,29,30]. As is evident, the same scheme can be used dealing with Lagrangians or action functionals.
Instead of the notation ε for the dummy parameter, it is admissible to use any other letter, which, as is clear, is of no importance. Sometimes, one denotes the dummy parameter as δ and, using the same standard scheme, one calls it delta expansion. However, using a different notation does not compose a different method.

11.2. Scaling Relations: Partition Function

The choice of variables for each particular problem is the matter of convenience. Often, it is convenient to use the combinations of parameters naturally occurring in the considered case. These combinations can be found from the scaling relations available for the considered problem.
Let us start with the simple, but instructive, case of the integral representing the partition function (or generating functional) of the so-called zero-dimensional φ 4 field theory
Z ( g , ω 0 ) = 1 π exp ( H [ φ ] ) d φ ,
with the Hamiltonian
H [ φ ] = ω 0 2 φ 2 + g φ 4 ,
where g > 0 .
Invoking the scaling φ λ φ leads to the relation
Z ( g , ω 0 ) = λ Z λ 4 g , λ 2 ω 0 .
Setting λ = g 1 / 4 yields the equality
Z ( g , ω 0 ) = 1 g 1 / 4 Z 1 , ω 0 g .
In addition, setting λ = ω 0 1 / 2 gives
Z ( g , ω 0 ) = 1 ω 0 Z g ω 0 2 , 1 .
These relations show that, at a large coupling constant, the expansion is realized over the combination ω 0 / g , while, at a small coupling constant, the natural expansion is over g / ω 0 2 .

11.3. Scaling Relations: Anharmonic Oscillator

The other typical example frequently treated for demonstrational purposes is the one-dimensional anharmonic oscillator with the Hamiltonian
H = 1 2 2 x 2 + ω 0 2 2 x 2 + g x p ( p > 0 ) ,
where g > 0 . Let the energy levels E ( g , ω 0 ) of the Hamiltonian be of interest.
By scaling the spatial variable, x λ x results in the relation
E ( g , ω 0 ) = λ 2 E λ p + 2 g , λ 2 ω 0 .
Setting λ = g 1 / ( 1 + p / 2 ) gives
E ( g , ω 0 ) = g 1 / ( 1 + p / 2 ) E 1 , ω 0 g 1 / ( 1 + p / 2 ) ,
while, for λ = ω 0 1 / 2 , one gets the relation,
E ( g , ω 0 ) = ω 0 E g ω 0 1 + p / 2 , 1 .
In particular, for the quartic anharmonic oscillator, with p = 4 , one has
E ( g , ω 0 ) = g 1 / 3 E 1 , ω 0 g 1 / 3 ,
and
E ( g , ω 0 ) = ω 0 E g ω 0 3 , 1 ( p = 4 ) .
Again, these relations suggest what are the natural variables for expansions over large or small coupling constants.

11.4. Optimized Expansion: Partition Function

The standard scheme of the optimized perturbation theory has been applied to the model (129) many times, accepting as an initial Hamiltonian the form
H 0 = ω 2 φ 2 ,
in which ω is a control parameter. Then, Hamiltonian (124) becomes:
H ε = ω 2 φ 2 + ε [ ( ω 0 2 ω 2 ) φ 2 + g φ 4 .
Note that Hamiltonian (130) transforms into Equation (141) by means of the replacement
ω 0 2 ω 2 + ε ( ω 0 2 ω 2 ) , g ε g .
Following the standard scheme of optimized perturbation theory for the partition function, and using the optimization conditions for defining control functions, it was found [40,41,44] that, at large orders, the control functions behave as
ω k ( g ) α ω 0 ( g k ) 1 / 4 ( k ) .
The minimal-difference and minimal-derivative conditions give α = 1.0729855 . It was proved [40,41] that this scheme results in the sequence of optimized approximants for the partition function that converges to the exact numerical value. The convergence occurs for any α > α c = 0.9727803 .

11.5. Optimized Expansion: Anharmonic Oscillator

The one-dimensional quartic anharmonic oscillator with the Hamiltonian (134), where p = 4 and g > 0 , also serves as a typical touchstone for testing approximation methods. The initial approximation is characterized by the harmonic oscillator
H 0 = 1 2 2 x 2 + ω 2 2 x 2 ,
in which ω is a control parameter. The Hamiltonian (124) takes the form
H ε = 1 2 2 x 2 + ω 2 2 x 2 + ε [ 1 2 ( ω 0 2 ω 2 ) x 2 + g x 4 .
As is seen, the transformation from Equation (132) into Equation (143) is realized by the same substitution (140), with the substitution for ω 0 that can be represented as
ω 0 2 ω 2 1 ε ( 1 ω 0 2 ω 2 .
This shows the appearance of the characteristic combination 1− (ω0/ω)2 that is used below.
Calculating the energy eigenvalues following the standard scheme, one finds [43,44] the control function
ω k ( g ) α ω 0 ( g k ) 1 / 3 ( k ) ,
with α 1 for both the minimal-difference and minimal-derivative conditions. The convergence of the sequence of optimized approximants to the exact numerical values [42], found from the solution of the Schrödinger equation, takes place for α > α c = 0.9062077 .

12. Order-Dependent Mapping

Sometimes, the procedure can be simplified by transforming the initial expansion, say in powers of a coupling constant, into expansions in powers of other parameters. By choosing the appropriate change of variables, it can be possible to reduce the problem to the form where control functions u k ( g ) are downgraded to control parameters u k . The change of variables depends on the approximation order, because of which it is called the order-dependent mapping [56].

12.1. Change of Variables

Let us be given an expansion in powers of a variable g,
f k ( g ) = n = 0 k a n g n .
By analyzing the properties of the considered problem, such as its scaling relations and the typical combinations of parameters arising in the process of deriving perturbative series, it is possible to notice that it is convenient to denote some parameter combinations as new variables. Then, one introduces the change of variables
g = g ( z , u ) = u y ( z ) ,
where
u = g y ( z ) = u ( g , z ) ,
is treated as a control parameter. By substituting Equation (149) into Equation (148) gives the function f k ( g ( z , u ) ) , which has to be expanded in powers of z up to order k, leading to the series
F k ( z , u ) = n = 0 k b n ( u ) z n .
The minimal-difference condition
F k ( z , u ) F k 1 ( z , u ) = 0 ,
yields the equation
b k ( u ) = 0 , u = u k ,
defining the control parameters u k . Since, according to Equation (150), the value u k denotes the combination of parameters u k = u k ( g , z ) , hence it determines the control functions z k ( g ) . The pair u k and z k ( g ) , being substituted into Equation (151), results in the optimized approximants
F ¯ k ( g ) = F k ( z k ( g ) , u k ) .
Thus, the convenience of the chosen change of variables is in the possibility of dealing at the intermediate step with control parameters instead of control functions that appear at a later stage.

12.2. Partition Function

To illustrate the method, let us consider the partition function (129) following the described scheme [56]. From the substitution (142), it is clear that natural combinations of parameters appearing in perturbation theory with respect to the term with ε in the Hamiltonian (141) are
z = ω 2 ω 0 2 ω 2 = 1 ω 0 2 ω 2 ,
and
y ( z ) = ω 2 ( ω 2 ω 0 2 ) ω 0 4 = z ( 1 z ) 2 .
Then, the combination of parameters (150) reads as
u = g z ( 1 z ) 2 = g ω 0 4 ω 2 ( ω 2 ω 0 2 ) .
In order to simplify the notation, it is possible to notice that the parameter ω always enters the equations being divided by ω 0 . Therefore, measuring ω in units of ω 0 is equivalent to setting ω 0 1 . In these units,
z = 1 1 ω 2 , u = g ω 2 ( ω 2 1 ) .
Finding from the minimal-difference condition (152) the control parameter u k and using definition (157) gives the control function
z k ( g ) = 1 u k 2 + 4 g u k u k 2 g .
Then, relation (155) results in the control function
ω k ( g ) = 1 1 z k ( g ) = 1 2 1 + 1 + 4 g u k 1 / 2 .
Finally, one gets the partition function Z k ( z k ( g ) , u k ) .
This procedure, with the change of variables used above, has been shown [57] to be equivalent to the standard scheme of optimized perturbation theory resulting in optimized approximants Z ¯ k ( g ) .

12.3. Anharmonic Oscillator

Again using the dimensionless units, as in the previous section, one sets the notations
y ( z ) = z ( 1 z ) 3 / 2 , z = 1 1 ω 2 .
Then, the combination (150) becomes:
u = g ω ( ω 2 1 ) = g z ( 1 z ) 3 / 2 .
Similarly to the previous section, one finds the control parameter u k and, from Equation (161), one obtains the control functions z k ( g ) and ω k ( g ) . The resulting energy levels E k ( z k ( g ) , u k ) coincide with the optimized approximants E ¯ k ( g ) , as has been proved in [57].

13. Variational Expansions

The given expansion over the coupling constant (148) can be reexpanded with respect to other variables in several ways. One of the possible reexpansions has been termed variational perturbation theory [31]. Below, it is illustrated by the example of the anharmonic oscillator in order to compare this type of a reexpansion with other methods.
Let us consider the energy levels of the anharmonic oscillator with the Hamiltonian (134) with p = 4 . As is clear from the scaling relations of Section 11, the energy can be represented as an expansion
E k ( g , ω 0 ) = ω 0 n = 0 k c n g ω 0 3 n .
One has the identity,
ω 0 2 = ω 2 + ω 0 2 ω 2 ,
that is a particular case of the substitution (142) with the control parameter ω and ε = 1 . Employing the notation
z = 1 ω 0 2 ω 2 = g ω 0 3 ω 3 u ,
where
u = g ω 0 3 ω 3 z = g ω 0 3 ω ( ω 2 ω 0 2 ) = g z ( 1 z ) 3 / 2 ,
It is straightforward to rewrite the identity (163) in the form
ω 0 = ω 1 z = ω 1 g ω 3 u .
This form is substituted into expansion (162), which then is reexpanded in powers of the new variable g / ω 3 , while keeping u untouched and setting ω 0 to one. The reexpanded series is truncated at order k. Comparing this step with the expansion in Section 11, it is evident that this is equivalent to the expansion over the dummy parameter ε . In addition, comparing the expansion over g / ω 3 with the expansion over z in Section 12, one can see that they are also equivalent. Thus, one comes to the expansion
E k ( g , ω ) = ω n = 0 k d n ( u ) g ω 3 n ,
where
d n ( u ) = j = 0 n C n j 1 u n j .
Then, one substitutes back the expression (165) for u = u ( g , ω ) .
The control function ω k ( g ) is defined by the minimal derivative condition, or, when the latter does not have real solutions, by the zero second derivative over ω of the energy E k ( g , ω ) . The found control function ω k ( g ) is substituted into E k ( g , ω ) , thus giving the optimized approximant
E ¯ k ( g ) = E k ( g , ω k ( g ) ) .
The equivalence of the above expansion in powers of g / ω 3 to the expansions with respect to the dummy parameter ε , or with respect to the parameter ε, becomes evident if one uses the notation of the present section and let us notice that the substitution (146) can be written as
ω 0 ω 1 ε z = ω 1 ε g ω 3 u .
This makes it immediately clear that the expansion over g / ω 3 , with keeping u untouched, is identical to the expansion over the dummy parameter ε .

14. Control Functions and Control Parameters

It is important to remark that it is necessary to be cautious introducing control functions through the change of variables and reexpansion. Strictly speaking, such a change cannot be postulated arbitrarily. When the change of variables is analogous to the procedure of using the substitutions, such as Equations (142), (146) or (169), naturally arising in perturbation theory, as in Section 4, then the results of these variants will be close to each other. However, if the change of variables is arbitrary, the results can be not merely inaccurate, but even qualitatively incorrect [27,58].
It is also useful to mention that, employing the term control functions, one keeps in mind that, in particular cases, they can happen to become parameters, although order-dependent. Then, instead of functions u k ( x ) , one can have parameters u k . There is nothing wrong with this, as far as parameters are a particular example of functions. The reduction of control functions to control parameters can occur in the following cases.
It may happen that in the considered problem there exists such a combination of characteristics that compose the quantities u k depending only on the approximation order but not depending on the variable x. For instance, this happens in the mapping of Section 12, where the combinations u k = u k ( g , ω k ( g ) ) play the role of control parameters. In the case of the partition function, this is the combination (157) and for the anharmonic oscillator, it is the combination (161).
The other example is the existence in the applied optimization of several conditions restricting the choice of control parameters. The typical situation is when the optimization condition consists of the comparison of asymptotic expansions of the sought function and of the approximant. Suppose that, in addition to the small-variable expansion,
f k ( x ) = n = 0 k a n x n ( x 0 ) ,
the large-variable expansion of the sought function,
f ( x ) n = 0 p b n 1 x n ( x ) ,
is known.
Let us assume that the optimized approximant F k ( x , u k ( x ) ) is found where the control functions u k ( x ) are defined by one of the optimization conditions of Section 6. These conditions provide a uniform approximation of the sought function on the whole interval of its definition. However, the resulting approximants F k ( x , u k ( x ) ) are not required to give exact coefficients of asymptotic expansions either at small or at large variable x. If there is a need that these asymptotic coefficients exactly coincide with the coefficients of the known asymptotic expansions (170) and (171), then one has to implant additional control parameters and impose additional asymptotic conditions. This can be done by using the method of corrected Padé approximants [27,59,60,61,62]. To this end, the optimized approximant is defined as
f ¯ k ( x ) = F k ( x , u k ( x ) ) P N / N ( x ) ,
where
P N / N ( x ) = a 0 + n = 1 N c n x n 1 + n = 1 N d n x n ,
is a diagonal Padé approximant, whose coefficients c n and d n , playing the role of control parameters, are prescribed by the accuracy-through-order procedure, so that the asymptotic expansions of Equation (172) would coincide with the given asymptotic expansions of the sought function at small x,
f ¯ k ( x ) n = 0 k a n x n ( x 0 ) ,
and at large x,
f ¯ k ( x ) n = 0 p b n 1 x n ( x ) .
The number of the parameters in the Padé approximant is such that to satisfy the imposed asymptotic conditions (174) and (175).

15. Self-Similar Approximation Theory

As has been emphasized above, the idea of introducing control functions for the purpose of governing the convergence of a sequence stems from the optimal control theory, where one introduces control functions in order to regulate the trajectory of a dynamical system, for instance, in order to force the trajectory to converge to a desired point. The analogy between perturbation theory and the theory of dynamical systems has been strengthened even more in the self-similar approximation theory [26,27,63,64,65,66,67]. The idea of this theory is to consider the transfer from one approximation to another as the motion on the manifold of approximants, where the approximation order plays the role of discrete time.
Suppose, after implanting control functions, as explained in Section 4, one has the sequence of approximants F k ( x , u k ) . Recall that the control functions can be defined in different ways, as has been discussed above. Therefore, we, actually, have the manifold of approximants associated with different control functions,
A = { F k ( x , u k ) : R × R R ; k = 0 , 1 , 2 , } .
This to be called the approximation manifold. Generally, it could be possible to define a space of approximants. However, the term approximation space is used in mathematics in a different sense [68]. Thus, one deals with the approximation manifold. The transfer from an approximant F k to another approximant F k + p can be understood as the motion with respect to the discrete time, whose role is played by the approximation order k. The sequence of approximants F k ( x , u k ) with a fixed choice of control functions u k = u k ( x ) defines a trajectory on the approximation manifold (176).
Let us fix the rheonomic constraint
F 0 ( x , u k ( x ) ) = f , x = x k ( f ) ,
defining the expansion function x k ( f ) . Recall that, in the theory of dynamical systems, a rheonomic constraint is that whose constraint equations explicitly contain or are dependent upon time. In the case considered here, time is the approximation order k. The inverse constraint equation is
x k ( F 0 ( x , u k ( x ) ) ) = x .
Let us introduce the endomorphism
y k ( f ) : Z + × R R ,
by the definition acting as
y k ( f ) F k ( x k ( f ) , u k ( x k ( f ) ) ) .
This endomorphism and the approximants are connected by the equality
y k ( F 0 ( x , u k ( x ) ) ) = F k ( x , u k ( x ) ) .
The set of endomorphisms forms a dynamical system in discrete time
{ y k ( f ) : Z + × R R } ,
with the initial condition
y 0 ( f ) = f .
By this construction, the sequence of endomorphisms { y k ( f ) } , forming the dynamical system trajectory, is bijective to the sequence of approximants { F k ( x , u k ( x ) ) } . Since control functions, by default, make the sequence of approximants F k ( x , u k ( x ) ) convergent, this means that there exists a limit
F * ( x ) = lim k F k ( x , u k ( x ) ) .
In addition, as far as the sequence of approximants is bijective to the trajectory of the dynamical system, there should exist the limit
y * ( f ) = lim k y k ( f ) .
This limit, being the final point of the trajectory, implies that it is a fixed point, for which
y p ( y * ( f ) ) = y * ( f ) ( p 0 ) .
Thus, finding the limit of an approximation sequence is equivalent to determining the fixed point of the dynamical system trajectory.
One may notice that, for large p, the self-similar relation holds:
y k + p ( f ) y k ( y p ( f ) ) ( p ) ,
which follows from conditions (185) and (186). As far as in the real situations, it is usually impossible to reach the limit of infinite approximation order, the validity of the self-similar relation for finite approximation orders is assumed:
y k + p ( f ) = y k ( y p ( f ) ) .
This relation implies the semi-group property
y k · y p = y k + p , y 0 = 1 .
The dynamical system in discrete time (182) with the above semi-group property is called cascade (semicascade). The theory of such dynamical systems is well developed [69,70]. In the present study, this is an approximation cascade [27].
Since, as it is said above, in realistic situations it is possible to deal only with finite approximation orders, one can find not an exact fixed point y * ( f ) , but an approximate fixed point y k * ( f ) . The corresponding approximate limit of the considered sequence is
F k * ( x , u k ( x ) ) = y k * ( F k ( x , u k ( x ) ) ) .
If the form F k ( x , u k ) is obtained by means of a transformation
F k ( x , u k ) = T ^ [ u ] f k ( x ) ,
like in Equation (27), then the resulting self-similar approximant reads as
f k * ( x ) = T ^ 1 [ u ] F k * ( x , u k ( x ) ) .

16. Embedding Cascade into Flow

Usually, it is more convenient to deal with dynamical systems in continuous time than with systems in discrete time. For this purpose, it is possible to embed the approximation cascade into an approximation flow, which is denoted as
{ y k ( f ) : Z + × R R } { y ( t , f ) : R + × R R } ,
and implies that the endomorphism in continuous time enjoys the same group property as the endomorphism in discrete time,
y ( t + t , f ) = y ( t , y ( t , f ) ) ,
that the flow trajectory passes through all points of the cascade trajectory,
y ( t , f ) = y k ( f ) ( t = k ) ,
and starts from the same initial point,
y ( 0 , f ) = f .
The self-similar relation (194) can be represented as the Lie equation
t y ( t , f ) = v ( y ( t , f ) ) ,
in which v ( y ( t , f ) ) is a velocity field. Integrating the latter equation yields the evolution integral
y k y k * d y v ( y ) = t k ,
where t k is the time required for reaching the fixed point y k * = y k * ( f ) from the approximant y k = y k ( f ) . Using relations (181) and (190), this can be rewritten as
F k F k * d f v k ( f ) = t k ,
where F k = F k ( x , u k ( x ) ) and F k * = F k * ( x , u k ( x ) ) .
The velocity field can be represented by resorting to the Euler discretization
v_k(f) = y_{k+1}(f) − y_k(f) .
This is equivalent to the form
v_k(f) = F_{k+1}(x_{k+1}, u_{k+1}) − F_k(x_k, u_k) ,
in which
x k = x k ( f ) , u k = u k ( x k ) = u k ( x k ( f ) ) .
One may notice that the velocity field is directly connected with the Cauchy difference (39), since
v k ( f ) = C ( F k + 1 , F k ) .
As is explained in Section 6, the Cauchy difference of zero order equals zero, hence in that order the velocity is zero, and F k * = F k . The Cauchy difference of first order is nontrivial, being given by expression (46). In this order, the velocity field becomes:
v_k(f) = F_{k+1}(x_k, u_k) − F_k(x_k, u_k) + ( u_{k+1} − u_k ) ∂F_k(x_k, u_k)/∂u_k .
The smaller the velocity, the faster the fixed point is reached. Therefore, control functions should be defined in order to make the velocity field minimal:
min_u | v_k(f) | = min_u | C(F_{k+1}, F_k) | .
Thus, one returns to the optimization conditions of optimized perturbation theory, discussed in Section 6. Opting for the optimization condition
( u_{k+1} − u_k ) ∂F_k(x_k, u_k)/∂u_k = 0 ,
simplifies the velocity field to the form
v_k(f) = F_{k+1}(x_k, u_k) − F_k(x_k, u_k) .

17. Stability Conditions

The sequence { y_k(f) } defines the trajectory of the approximation cascade, which is a type of dynamical system. The motion of dynamical systems can be stable or unstable. The stability of motion for the approximation cascade can be characterized [27,67,71] similarly to the stability of other dynamical systems [69,72,73]. Dealing with real problems, one usually considers finite steps k. Therefore, the motion stability can be defined only locally.
The local stability at the k-th step is described by the local map multiplier
μ_k(f) ≡ δy_k(f)/δy_0(f) = ∂y_k(f)/∂f .
The motion at the step k, starting from an initial point f, is stable when
| μ k ( f ) | < 1 .
The maximal map multiplier
μ_k ≡ sup_f | μ_k(f) | ,
defines the global stability with respect to f, provided that
μ k < 1 .
The maximum is taken over all admissible values of f.
The image of the map multiplier (207) on the manifold of the variable x is
m_k(x) = μ_k(F_0(x, u_k(x))) .
The motion at the k-th step at the point x is stable if
| m k ( x ) | < 1 .
Respectively, the motion is globally stable with respect to the domain of x when the maximal map multiplier
m_k ≡ sup_x | m_k(x) | ,
is such that
m k < 1 .
The map multiplier at the fixed point y k * ( f ) is
μ_k*(f) ≡ ∂y_k*(f)/∂f .
The fixed point is locally stable when
| μ k * ( f ) | < 1 ,
and it is globally stable with respect to f if the maximal multiplier
μ_k* ≡ sup_f | μ_k*(f) | ,
satisfies the inequality
μ k * < 1 .
The above conditions of stability can be rewritten in terms of the local Lyapunov exponents
λ_k(f) ≡ (1/k) ln | μ_k(f) | , λ_k*(f) ≡ (1/k) ln | μ_k*(f) | .
The motion at the k-th step is stable provided the Lyapunov exponents are negative. The occurrence of local stability implies that the calculational procedure should be numerically convergent at the considered steps. Thus, even without knowing the exact solution of the problem and being unable to reach the limit k → ∞, one can be sure that local numerical convergence for finite k is present.
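To make the stability check concrete, the following minimal Python sketch (not part of the original formulation) estimates local map multipliers and Lyapunov exponents by finite differences for a toy cascade; the endomorphism form is borrowed from Section 19, and the coefficients a_n and the exponent s are purely illustrative assumptions.

```python
# A minimal sketch: local map multipliers and Lyapunov exponents
# for a toy approximation cascade with hypothetical coefficients.
import numpy as np

A = (-0.5, 0.1, -0.02)   # hypothetical coefficients a_1, a_2, a_3
S = 2.0                  # hypothetical control exponent s

def y(k, f):
    """Toy cascade endomorphism y_k(f) = f + sum_n a_n f^(1 + n/s)."""
    return f + sum(A[n - 1] * f**(1 + n / S) for n in range(1, k + 1))

def multiplier(k, f, h=1e-6):
    """Local map multiplier mu_k(f) = d y_k(f) / d f, by central differences."""
    return (y(k, f + h) - y(k, f - h)) / (2 * h)

for k in (1, 2, 3):
    mu = multiplier(k, 0.5)
    lam = np.log(abs(mu)) / k   # local Lyapunov exponent
    print(k, mu, lam)           # stable at step k where |mu| < 1, i.e., lam < 0
```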

18. Free Energy

In order to demonstrate that the self-similar approximation theory improves the results of optimized perturbation theory, it is instructive to consider the same problem of calculating the free energy (thermodynamic potential) of the model discussed in Section 7,
f(g) = − ln Z(g) ( g > 0 ) ,
with the statistical sum (57).
Following Section 7, let us accept the initial Hamiltonian (59) and define Hamiltonian (60). Expanding the free energy (220) in powers of the dummy parameter ε, one has the sequence of approximants (62). The control functions ω_k(g) are defined by the optimization conditions (63) and (64), which give
ω_k(g) = { (1/2) [ 1 + ( 1 + 12 s_k g )^{1/2} ] }^{1/2} ,
where
s 1 = s 2 = 1 , s 3 = s 4 = 2.239674 .
The rheonomic constraint (177) takes the form
F 0 ( g , ω k ( g ) ) = ln ω k ( g ) = f .
From here, the expansion function,
g_k(f) = e^{2f} ( e^{2f} − 1 ) / ( 3 s_k ) ,
is obtained.
The endomorphism (180) reads as
y_k(f) = f + Σ_{n=1}^{k} A_{kn} α^n(f) ,
with the coefficients A k n given in Refs. [27,71,74,75], and where
α(f) = 1 − e^{−2f} .
The cascade velocity (200) becomes:
v_k(f) = A_{k+1,k+1} α^{k+1}(f) .
Taking the evolution integral (200), with t k = 1 , one comes to the self-similar approximants
f k * ( g ) = F k * ( g , ω k ( g ) ) .
The accuracy of the approximations is described by the percentage errors
ε_k* ≡ sup_g | [ f_k*(g) − f(g) ] / f(g) | × 100% ,
where f ( g ) is the exact numerical value of expression (220). Here, one has
ε 1 * = 3 % , ε 2 * = 2 % , ε 3 * = 0.1 % .
The map multipliers (207) are
μ_k(f) = 1 + 2 [ 1 − α(f) ] Σ_{n=1}^{k} n A_{kn} α^{n−1}(f) .
The coupling parameter g pertains to the domain [ 0 , ∞ ). Then f ∈ [ 0 , ∞ ) and α(f) ∈ [ 0 , 1 ). The maximal map multiplier (209) is found to satisfy the stability condition (210).
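As a consistency check of the formulas above, one can verify numerically that the control function ω_k(g) and the expansion function g_k(f) are mutually inverse under the rheonomic constraint f = ln ω_k(g); the following Python sketch (an illustration, not the authors' code) does so for a trial value of s_k.

```python
# A consistency check, assuming the reconstructed formulas above.
import math

def omega(g, s):
    """Control function omega_k(g) for trial s = s_k."""
    return math.sqrt(0.5 * (1.0 + math.sqrt(1.0 + 12.0 * s * g)))

def g_of_f(f, s):
    """Expansion function g_k(f) = e^{2f}(e^{2f} - 1)/(3 s_k)."""
    return math.exp(2 * f) * (math.exp(2 * f) - 1.0) / (3.0 * s)

s, g = 1.0, 0.7
f = math.log(omega(g, s))   # rheonomic constraint: F_0(g, omega) = ln omega = f
print(g, g_of_f(f, s))      # the two numbers coincide
```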

19. Fractal Transform

As is explained in Section 4, control functions can be incorporated into a perturbative sequence either through initial conditions, or by means of the change of variables, or by a sequence transformation. In the above example of Section 18, the implantation of control functions into initial conditions is considered. Now, let us study another way, when control functions are incorporated through a sequence transformation.
Let us consider an asymptotic series
f_k(x) = f_0(x) [ 1 + Σ_{n=1}^{k} a_n x^n ] ,
in which f 0 ( x ) is a given function. Actually, it is sufficient to deal with the series
f_k(x) = 1 + Σ_{n=1}^{k} a_n x^n .
To return to the case of series (230), one just needs to make the substitution
f_k(x) → f_0(x) f_k(x) .
Following the spirit of self-similarity, let us recall that the latter is usually connected with the power-law scaling and fractal structures [76,77,78]. Therefore, it looks natural to introduce control functions through a fractal transform [79], say of the type [26,27,33,34,35]
F_k(x, s) = x^s f_k(x) .
The inverse transformation is
f_k(x) = x^{−s} F_k(x, s) .
With the series (231):
F_k(x, s) = x^s + Σ_{n=1}^{k} a_n x^{n+s} .
As is mentioned in Section 4, the scaling relation (30) is valid. The scaling exponent s plays the role of a control parameter.
In line with the self-similar approximation theory, let us define the rheonomic constraint
F_0(x, s) = x^s = f ,
yielding the expansion function
x(f) = f^{1/s} .
The dynamic endomorphism becomes:
y_k(f) = f + Σ_{n=1}^{k} a_n f^{1+n/s} .
In addition, the cascade velocity is
v_k(f) = y_k(f) − y_{k−1}(f) = a_k f^{1+k/s} .
What now remains is to consider the evolution integral.

20. Self-Similar Root Approximants

The differential Equation (197) can be rewritten in the integral form
∫_{y*_{k−1}}^{y_k*} df / v_k(f) = t_k .
Substituting here the cascade velocity (239) gives the relation
y_k*(f) = [ ( y*_{k−1}(f) )^{1/m_k} + A_k ]^{m_k} ,
where
m_k ≡ − s_k / k , A_k ≡ a_k t_k / m_k .
Accomplishing the inverse transformation (234) leads to the equation
f_k*(x) = x^{−s} y_k*(f) ( f = x^s ) .
The explicit form of the latter is the recurrent relation
f_k*(x) = [ ( f*_{k−1}(x) )^{1/m_k} + A_k x^k ]^{m_k} .
Using the notation
n_j ≡ m_j / m_{j+1} = ( j + 1 ) s_j / ( j s_{j+1} ) ( j = 1, 2, … , k − 1 ) ,
and iterating this relation k − 1 times results in the self-similar root approximant
f_k*(x) = ( … ( ( 1 + A_1 x )^{n_1} + A_2 x² )^{n_2} + ⋯ + A_k x^k )^{m_k} .
This approximant is convenient for the problem of interpolation, where one can meet different situations.
(i)
The k coefficients a n of the asymptotic expansion (231) up to the k-th order are known and the exponent β of the large-variable behavior of the sought function is available, where
f(x) ≃ B x^β ( x → ∞ ) ,
although the amplitude B is not known. Then, setting the control functions s j = s , from Equation (244), one has:
n_j = ( j + 1 ) / j ( j = 1, 2, … , k − 1 ) ,
and the root approximant (245) becomes:
f_k*(x) = ( … ( ( ( 1 + A_1 x )² + A_2 x² )^{3/2} + A_3 x³ )^{4/3} + ⋯ + A_k x^k )^{m_k} .
For large variables x, the latter behaves as
f_k*(x) ≃ B_k x^{β_k} ( x → ∞ ) ,
with the amplitude
B_k = ( … ( ( A_1² + A_2 )^{3/2} + A_3 )^{4/3} + ⋯ + A_k )^{m_k} ,
and exponent
β k = k m k .
Equating β k to the known exponent β , one finds the root exponent,
m_k = β / k ( β_k = β ) .
All parameters A n can be found from the comparison of the initial series (231) with the small-variable expansion of the root approximant (248),
f_k*(x) ≃ f_k(x) ( x → 0 ) ,
which is called the accuracy-through-order procedure. Knowing all A_n , the large-variable amplitude B_k is obtained.
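The accuracy-through-order procedure is easily mechanized with a computer algebra system. The following sympy sketch illustrates case (i) for k = 2, taking √(1+x) as an assumed test function, since its large-variable exponent β = 1/2 is known; for this test, the procedure recovers the function exactly.

```python
# A minimal sketch of case (i) for k = 2, with the assumed test
# function sqrt(1+x), whose exponent beta = 1/2 is known.
import sympy as sp

x, A1, A2 = sp.symbols('x A1 A2')
a1, a2 = sp.Rational(1, 2), sp.Rational(-1, 8)   # sqrt(1+x) = 1 + x/2 - x^2/8 + ...
beta = sp.Rational(1, 2)
m2 = beta / 2                                    # root exponent m_k = beta/k

f2_star = ((1 + A1 * x)**2 + A2 * x**2)**m2
ser = sp.expand(sp.series(f2_star, x, 0, 3).removeO())

# accuracy-through-order: match the x and x^2 coefficients
sol = sp.solve([ser.coeff(x, 1) - a1, ser.coeff(x, 2) - a2], [A1, A2], dict=True)
print(sol)   # A1 = 1, A2 = 0: the approximant reduces to (1+x)^(1/2) exactly
```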
(ii)
The k coefficients a n of the asymptotic expansion (231) up to the k-th order are available and the amplitude B of the large-variable behavior of the sought function is known, but the large-variable exponent β is not known. Then, the parameters A n again are defined through the accuracy-through-order procedure (253). Equating the amplitudes B k and B results in the exponent
m_k = ln B / ln [ ( ( A_1² + A_2 )^{3/2} + A_3 )^{4/3} + ⋯ + A_k ] ( B_k = B ) .
(iii)
The k coefficients a n of the asymptotic expansion (231) are known and the large-variable behavior (246) is available, with both the amplitude B and exponent β known. Then, as earlier, the parameters A n are defined from the accuracy-through-order procedure and the exponent m k is given by Equation (252). The amplitude B k can be found in two ways, from expression (250) and equating B k and B. The difference between the resulting values defines the accuracy of the approximant.
(iv)
The k terms of the large-variable behavior are given,
f(x) ≃ Σ_{n=1}^{k} b_n x^{β_n} ( x → ∞ ) ,
where b_1 ≠ 0 , β_1 ≠ 0 , and the powers β_n are arranged in descending order,
β n > β n + 1 ( n = 1 , 2 , , k 1 ) .
Then, considering the root approximant (245) for large x , and comparing this expansion with the asymptotic form (255), one finds all parameters A n expressed through the coefficients b n , and the large-variable internal exponents are
n_j = ( j + 1 ) / j + ( β_{k−j+1} − β_{k−j} ) / j ( j = 1, 2, … , k − 1 ) ,
while the external exponent is
m_k = β_1 / k .
It is important to mention that the external exponent m k can be defined even without knowing the large-variable behavior of the sought function. This can be done by treating m k as a control function defined by an optimization condition from Section 6. This method has been suggested in Ref. [33].
Notice that, when it is more convenient to deal with the series for large variables, it is always possible to use the same methods as described above by transferring the large-variable expansions into small-variable ones by means of the change of the variable z = 1 / x .
Numerous applications of the self-similar root approximants to different problems are discussed in Refs. [26,27,80,81,82,83,84].

21. Self-Similar Nested Approximants

It is possible to notice that the series
f_k(x) = 1 + Σ_{n=1}^{k} a_n x^n ,
can be represented as the sequence
f k ( x ) = 1 + φ 1 ( x ) , φ 1 ( x ) = a 1 x ( 1 + φ 2 ( x ) ) ,
φ_2(x) = ( a_2 / a_1 ) x ( 1 + φ_3(x) ) , φ_3(x) = ( a_3 / a_2 ) x ( 1 + φ_4(x) ) ,
etc., through
φ_j(x) = ( a_j / a_{j−1} ) x ( 1 + φ_{j+1}(x) ) ( j = 1, 2, … , k − 1 ) ,
up to the last term
φ_k(x) = ( a_k / a_{k−1} ) x .
Applying the self-similar renormalization at each order of the sequence, considering φ j as variables, one obtains the renormalized sequence:
f_k*(x) = [ 1 + b_1 φ_1*(x) ]^{n_1} , φ_j*(x) = ( a_j / a_{j−1} ) x [ 1 + b_{j+1} φ*_{j+1}(x) ]^{n_{j+1}} ,
in which
b_j = t_j / n_j , n_j = − s_j ( j = 1, 2, … , k − 1 ) .
Using the notation
A_j ≡ ( a_j / a_{j−1} ) b_j = a_j t_j / ( a_{j−1} n_j ) ,
one comes to the self-similar nested approximant
f_k*(x) = ( 1 + A_1 x ( 1 + A_2 x ( ⋯ ( 1 + A_k x )^{n_k} ⋯ )^{n_3} )^{n_2} )^{n_1} .
For large x, this gives
f_k*(x) ≃ B_k x^{β_k} ( x → ∞ ) ,
with the amplitude
B_k = A_1^{n_1} A_2^{n_1 n_2} A_3^{n_1 n_2 n_3} ⋯ A_k^{n_1 n_2 n_3 ⋯ n_k} ,
and the exponent
β_k = n_1 + n_1 n_2 + n_1 n_2 n_3 + ⋯ + n_1 n_2 n_3 ⋯ n_k .
If the notation for the external exponent is changed to
m_k ≡ n_1 ,
and the internal exponents are kept constant,
n j = m ( j = 2 , 3 , , k ) ,
then the large-variable exponent becomes:
β_k = [ ( 1 − m^k ) / ( 1 − m ) ] m_k .
When the exponent β of the large-variable behavior is known, where
f(x) ∝ x^β ( x → ∞ ) ,
then, setting β k = β gives
m_k = [ ( 1 − m ) / ( 1 − m^k ) ] β ( β_k = β ) .
The parameter m should be defined in order to provide numerical convergence for the sequence { f k * ( x ) } . For instance, if m = 1 , then using the asymptotic form
m^k − 1 ≃ − ( 1 − m ) k ( m → 1 ) ,
one gets
m_k = β / k ( m = 1 ) .
In the latter case, the nested approximant (265), with the notation
D_n ≡ ∏_{j=1}^{n} A_j ,
becomes
f_k*(x) = ( 1 + D_1 x + D_2 x² + D_3 x³ + ⋯ + D_k x^k )^{m_k} .
The same form can be obtained by setting in the root approximant (245) all internal exponents n j = 1 .
The external exponent m k can also be defined by resorting to the optimization conditions of Section 6. Several applications of the nested approximants are given in [85].
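For numerical work, the nested approximant (265) can be evaluated by a simple backward recursion, as in the following sketch (the parameter values are hypothetical, chosen only for illustration).

```python
# A small evaluator (a sketch) for the nested form above:
# (1 + A_1 x (1 + A_2 x ( ... (1 + A_k x)^{n_k} ... )^{n_3})^{n_2})^{n_1}.
def nested_approximant(x, A, n):
    """A = [A_1, ..., A_k], n = [n_1, ..., n_k]."""
    value = 1.0 + A[-1] * x                    # innermost bracket 1 + A_k x
    for j in range(len(A) - 2, -1, -1):
        value = 1.0 + A[j] * x * value**n[j + 1]
    return value**n[0]

# hypothetical parameters, for illustration only
print(nested_approximant(2.0, [0.5, 0.3, 0.1], [0.5, 0.4, 0.3]))
```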

22. Self-Similar Exponential Approximants

When it is expected that the behavior of the sought function is exponential rather than of power law, then, in the nested approximants of the previous section, one can send n_j → ∞, hence b_j → 0 and A_j → 0. This results in the self-similar exponential approximants [86]
f_k*(x) = exp ( C_1 x exp ( C_2 x exp ( C_3 x ⋯ exp ( C_k x ) ) ) ) ,
in which
C_n = ( a_n / a_{n−1} ) t_n ( n = 1, 2, … , k ) .
The parameters t_n are to be defined from additional conditions [26,27], so that the sequence of the approximants is convergent. It is often sufficient to set t_n = 1/n. This expression appears as follows. By its meaning, t_n is the effective time required for reaching a fixed point from the previous step. Accomplishing n steps takes a time of order n t_n. The minimal time corresponds to one step. Equating n t_n to one gives t_n = 1/n. Some other ways of defining the control parameters t_n are considered in Refs. [26,27,86].
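A direct evaluation of the exponential approximant with the choice t_n = 1/n can be sketched as follows; the test coefficients a_n = 1/n! are an assumption taken for illustration.

```python
# A minimal sketch evaluating the self-similar exponential approximant
# for a series 1 + sum_n a_n x^n, with the simple choice t_n = 1/n.
import math

def exponential_approximant(x, a):
    """a = [a_0, a_1, ..., a_k] with a_0 = 1."""
    k = len(a) - 1
    C = [(a[n] / a[n - 1]) * (1.0 / n) for n in range(1, k + 1)]
    value = 1.0
    for Cn in reversed(C):              # build the tower from exp(C_k x) outward
        value = math.exp(Cn * x * value)
    return value

a = [1.0, 1.0, 0.5, 1.0 / 6.0]          # assumed test coefficients a_n = 1/n!
print(exponential_approximant(0.5, a), math.exp(0.5))
```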

23. Self-Similar Factor Approximants

By the fundamental theorem of algebra [87], a polynomial of any degree of one real variable over the field of real numbers can be split in a unique way into a product of irreducible first-degree polynomials over the field of complex numbers. This means that series (259) can be represented in the form
f_k(x) = ∏_{j=1}^{k} ( 1 + b_j x ) ,
with the coefficients b j expressed through a n . Applying the self-similar renormalization procedure to each of the factors in turn results in the self-similar factor approximants [88,89,90]
f_k*(x) = ∏_{j=1}^{N_k} ( 1 + A_j x )^{n_j} ,
where
N_k = k/2 ( k = 2, 4, 6, … ) , N_k = ( k + 1 )/2 ( k = 1, 3, 5, … ) .
The control parameters A j and n j are defined by the accuracy-through-order procedure by equating the like order terms in the expansions f k * ( x ) and f k ( x ) ,
f_k*(x) ≃ f_k(x) ( x → 0 ) .
In the present case, it is more convenient to compare the corresponding logarithms
ln f_k*(x) ≃ ln f_k(x) ( x → 0 ) .
This leads to the system of equations
Σ_{j=1}^{N_k} n_j A_j^n = D_n ( n = 1, 2, … , k ) ,
in which
D_n ≡ [ (−1)^{n−1} / ( n − 1 )! ] lim_{x→0} ( d^n / dx^n ) ln [ 1 + Σ_{m=1}^{n} a_m x^m ] .
This system of equations enjoys a unique (up to enumeration permutation) solution for all A j and n j when k is even, and when k is odd, one of A j can be set to one [27,91].
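For k = 2, where N_k = 1, the system above reduces to two equations for A_1 and n_1. The following sympy sketch solves them for the assumed test expansion of (1 − x)^{−1/2}, which the factor approximant reproduces exactly.

```python
# A sketch of the accuracy-through-order conditions for k = 2 (N_k = 1),
# using the assumed test expansion (1-x)^(-1/2) = 1 + x/2 + 3x^2/8 + ...
import sympy as sp

x, A, n = sp.symbols('x A n')
a1, a2 = sp.Rational(1, 2), sp.Rational(3, 8)

lhs = sp.expand(sp.series(n * sp.log(1 + A * x), x, 0, 3).removeO())
rhs = sp.expand(sp.series(sp.log(1 + a1 * x + a2 * x**2), x, 0, 3).removeO())

sol = sp.solve([lhs.coeff(x, 1) - rhs.coeff(x, 1),
                lhs.coeff(x, 2) - rhs.coeff(x, 2)], [A, n], dict=True)
print(sol)   # A = -1, n = -1/2: f_2*(x) = (1 - x)^(-1/2) is reproduced exactly
```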
At large values of the variable, one has:
f_k*(x) ≃ B_k x^{β_k} ( x → ∞ ) ,
where the amplitude and the large-variable exponent are
B_k = ∏_{j=1}^{N_k} A_j^{n_j} , β_k = Σ_{j=1}^{N_k} n_j .
If the large-variable exponent is known, for instance from scaling arguments, so that
f(x) ∝ x^β ( x → ∞ ) ,
then equating β k and β imposes on the exponents of the factor approximant the constraint
β_k = Σ_{j=1}^{N_k} n_j = β .
The self-similar factor approximants have been used for a variety of problems, as can be inferred from Refs. [27,88,89,90,91,92].

24. Self-Similar Combined Approximants

It is possible to combine different types of self-similar approximants as well as these approximants and other kinds of approximations.

24.1. Different Types of Approximants

Suppose a small-variable asymptotic expansion,
f_k(x) = Σ_{j=0}^{k} a_j x^j ( x → 0 ) ,
is given, which is to be converted into a self-similar approximation. At the same time, one can suspect that the behavior of the sought function is quite different at small and at large variables. In such a case, one can combine different types of self-similar approximants in the following way. Let us take several initial terms of series (287),
f_n(x) = Σ_{j=0}^{n} a_j x^j ( n < k ) ,
and construct from them a self-similar approximant f_n*(x). Then, one defines the ratio
C_{k/n}(x) ≡ f_k(x) / f_n*(x) ,
and expands the latter in powers of x as
C_{k/n}(x) = 1 + Σ_{j=n+1}^{k} b_j x^j ( x → 0 ) .
Constructing a self-similar approximant C k / n * ( x ) , one obtains the combined approximant,
f k * ( x ) = f n * ( x ) C k / n * ( x ) .
The approximants f n * ( x ) and C k / n * ( x ) can be represented by different forms of self-similar approximants. For example, it is possible to define f n * ( x ) as a root approximant, while C k / n * ( x ) as a factor or exponential approximant, depending on the expected behavior of the sought function [93].

24.2. Self-Similar Padé Approximants

Instead of two different self-similar approximants, it is possible, after constructing a self-similar approximant f_n*(x), to transform the remaining part (290) into a Padé approximant P_{M/N}(x), with M + N = k − n, so that
P_{M/N}(x) ≃ C_{k/n}(x) ( x → 0 ) .
The result is the self-similarly corrected Padé approximant, or briefly, the self-similar Padé approximant [59,60,61]
f k * ( x ) = f n * ( x ) P M / N ( x ) .
The advantage of this type of approximants is that they can correctly take into account the irrational behavior of the sought function, described by the self-similar approximant f_n*(x), as well as the rational behavior, represented by the Padé approximant P_{M/N}(x).
Note that Padé approximants (8) actually are a particular case of the factor approximants (274), where M factors correspond to n_j = 1 and N factors, to n_j = −1. This is because the Padé approximants can be represented as
P_{M/N}(x) = a_0 ∏_{m=1}^{M} ( 1 + A_m x ) ∏_{n=1}^{N} ( 1 + C_n x )^{−1} .

24.3. Self-Similar Borel Summation

It is possible to combine self-similar approximants with the method of Borel summation. According to this method, for a series (287), one can define [9,94] the Borel–Leroy transform
B_k(t, u) ≡ Σ_{n=0}^{k} [ a_n / Γ( n + 1 + u ) ] t^n ,
where u is chosen in order to improve convergence. The series (294) can be summed using one of the self-similar approximations, converting u into a control parameter u_k, thus giving B_k*(t, u_k). Then, the self-similar Borel–Leroy summation yields the approximant
f_k*(x) = ∫_0^∞ e^{−t} t^{u_k} B_k*( t x , u_k ) dt .
The case of the standard Borel summation corresponds to u k = 0 . Then, the self-similar Borel summation gives
f_k*(x) = ∫_0^∞ e^{−t} B_k*( t x ) dt .
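The following numerical sketch illustrates the standard self-similar Borel summation on an assumed test case, the divergent Euler series Σ_n (−1)^n n! x^n; its Borel transform has the coefficients (−1)^n, which the second-order factor approximant sums exactly into B*(t) = 1/(1 + t).

```python
# A numerical sketch of self-similar Borel summation (u_k = 0) for the
# assumed test case of the divergent Euler series sum_n (-1)^n n! x^n.
import numpy as np
from scipy.integrate import quad

def f_star(x):
    # B*(t) = 1/(1 + t) is the factor-approximant sum of the Borel transform
    value, _ = quad(lambda t: np.exp(-t) / (1.0 + t * x), 0.0, np.inf)
    return value

print(f_star(1.0))   # ~0.5963, the Borel sum of the Euler series at x = 1
```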
In addition to the combinations of different summation methods considered above, one can use other combinations. For example, the combination of exponential approximants and continued fractions has been employed [95].

25. Self-Similar Data Extrapolation

One often meets the following problem. There exists an ordered dataset
{ f_n : n = 1, 2, … , k } ,
labeled by the index n, and one is interested in the possibility of predicting the values f k + p outside this dataset. The theory of self-similar approximants suggests a solution to this problem [27,96].
Let us consider several last datapoints, for instance the last three points
{ g_0 ≡ f_{k−2} , g_1 ≡ f_{k−1} , g_2 ≡ f_k } .
How many datapoints one needs to take depends on the particular problem considered. For the explicit illustration of the idea, let us take three datapoints. The chosen points can be connected by a polynomial spline, in the present case, by a quadratic spline
g(t) = a + b t + c t² ,
defined so that
g(0) = g_0 = f_{k−2} , g(1) = g_1 = f_{k−1} , g(2) = g_2 = f_k .
From this definition, it follows that
a = f_{k−2} , b = − (1/2) ( f_k − 4 f_{k−1} + 3 f_{k−2} ) , c = (1/2) ( f_k − 2 f_{k−1} + f_{k−2} ) .
Treating polynomial (299) as an expansion in powers of t makes it straightforward to employ self-similar renormalization, thus obtaining a self-similar approximant g * ( t ) . For example, resorting to factor approximants, one gets:
g*(t) = a ( 1 + A t )^m ,
with the parameters
A = ( b² − 2 a c ) / ( a b ) , m = b² / ( b² − 2 a c ) .
The approximants g*(t), with t ≥ 2, provide the extrapolation of the initial dataset. The nearest extrapolation point outside the dataset can be estimated as
g* = (1/2) [ g*(2) + g*(3) ] .
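The whole extrapolation step can be transcribed directly into code, as in the following sketch (the dataset is hypothetical).

```python
# A direct transcription (as a sketch) of the spline-plus-factor-approximant
# extrapolation, using the last three datapoints of an ordered dataset.
def extrapolate(f):
    g0, g1, g2 = f[-3], f[-2], f[-1]
    a = g0
    b = -0.5 * (g2 - 4.0 * g1 + 3.0 * g0)
    c = 0.5 * (g2 - 2.0 * g1 + g0)
    A = (b * b - 2.0 * a * c) / (a * b)
    m = b * b / (b * b - 2.0 * a * c)
    g_star = lambda t: a * (1.0 + A * t)**m    # factor approximant g*(t)
    return 0.5 * (g_star(2.0) + g_star(3.0))   # nearest extrapolation point

print(extrapolate([1.0, 1.5, 1.9, 2.2]))       # hypothetical dataset, ~2.4
```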
This method can also be used for improving the convergence of the sequence of self-similar approximants. Then, the role of datapoints f k is played by the self-similar approximants f k * ( x ) . In that case, all parameters a = a ( x ) , b = b ( x ) , c = c ( x ) , as well as A = A ( x ) and m = m ( x ) become control functions. This method of data extrapolation has been used for several problems, such as predictions for time series and convergence acceleration [27,96,97].

26. Self-Similar Diff-Log Approximants

There is a well known method employed in statistical physics called diff-log transformation [98,99]. This transformation for a function f ( x ) is
D(x) ≡ ( d / dx ) ln f(x) .
The inverse transformation, assuming that the function f ( x ) is normalized so that
f ( 0 ) = 1 ,
reads as
f(x) = exp { ∫_0^x D(t) dt } .
When one starts with an asymptotic expansion,
f_k(x) = 1 + Σ_{n=1}^{k} a_n x^n ,
the diff-log transformation gives
D_k(x) = ( d / dx ) ln f_k(x) .
Expanding the latter in powers of x yields
D_k(x) ≃ Σ_{n=0}^{k} b_n x^n ( x → 0 ) ,
with the coefficients b n expressed through a n . This expansion can be summed by one of the self-similar methods giving D k * ( x ) . Involving the inverse transformation (305) results in the self-similar diff-log approximants
f_k*(x) = exp { ∫_0^x D_k*(t) dt } .
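The diff-log pipeline can be sketched end to end with sympy; here the input series of √(1+x) is an assumption for illustration, and the two-term diff-log expansion is summed by a factor approximant with A_1 set to one (k odd), which happens to reproduce the exact D(x).

```python
# A sympy sketch of the diff-log pipeline, with the assumed test
# series of sqrt(1+x): f_k(x) = 1 + x/2 - x^2/8.
import sympy as sp

x = sp.symbols('x')
f_k = 1 + x / 2 - x**2 / 8
D_k = sp.expand(sp.series(sp.diff(sp.log(f_k), x), x, 0, 2).removeO())
print(D_k)                                        # 1/2 - x/2

D_star = sp.Rational(1, 2) / (1 + x)              # factor-approximant sum of D_k
f_star = sp.exp(sp.integrate(D_star, (x, 0, x)))  # inverse transformation (305)
print(sp.simplify(f_star))                        # sqrt(x + 1)
```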
A number of applications of the diff-log transformation can be found in Refs. [61,83,99], where it is shown that the combination of the diff-log transform with self-similar approximants gives essentially more accurate results than the diff-log Padé method.

27. Critical Behavior

One says that a function f ( x ) experiences critical behavior at a critical point x c , when this function at that point either tends to zero or to infinity. It is possible to distinguish two cases, when the critical behavior occurs at infinity, and when at a finite critical point. These two cases are considered below separately.

27.1. Critical Point at Infinity

If the critical behavior happens at infinity, the considered function behaves as
f(x) ≃ B x^β ( x → ∞ ) .
Then, the diff-log transform tends to the form
D(x) ≃ β / x ( x → ∞ ) .
Here, B is a critical amplitude, while β is a critical exponent.
The critical exponents have a special interest for critical phenomena. If one is able to define a self-similar approximation f k * ( x ) directly to the studied function f ( x ) , then the critical exponent can be found from the limit
β_k = lim_{x→∞} [ ln f_k*(x) / ln x ] .
Otherwise, it can be obtained from the equivalent form,
β_k = lim_{x→∞} x D_k*(x) ,
where a self-similar approximation for the diff-log transform D k * ( x ) is needed.
The convenience of using the representation (313) lies in the possibility of employing a larger arsenal of different self-similar approximants. Of course, the factor approximants can be involved in both cases. However, the root and nested approximants require the knowledge of the large-variable exponent of the sought function, which is not always available. On the contrary, the large-variable behavior of the diff-log transform (311) is known. Therefore, for constructing a self-similar approximation for the diff-log transform, one can resort to any type of self-similar approximant.
It is necessary to mention that the root and nested approximants can be defined, without knowing the large-variable behavior, by invoking optimization conditions of Section 6 prescribing the value of the external exponent m k , as is explained in Ref. [33]. However, this method becomes rather cumbersome for high-order approximants.

27.2. Finite Critical Point

If the critical point is located at a finite x_c lying in the interval ( 0 , ∞ ), then
f(x) ≃ B ( x_c − x )^β ( x → x_c − 0 ) .
Here, the diff-log transform behaves as
D(x) ≃ − β / ( x_c − x ) ( x → x_c − 0 ) .
Again, the critical exponent can be derived from the limit
β_k = lim_{x→x_c−0} [ ln f_k*(x) / ln ( x_c − x ) ] ,
provided a self-similar approximant f k * ( x ) is constructed. However, it may happen that the other form
β_k = lim_{x→x_c−0} ( x − x_c ) D_k*(x)
is more convenient, where a self-similar approximant for the diff-log transform D_k*(x) is easier to find. This is because the pole of D_k*(x) nearest to zero defines the critical point x_c, while the residue (317) yields the critical exponent.
Note that, by a change of the variable, the problem of a finite critical point can be reduced to the case of critical behavior at infinity. For instance, one can use the change of the variable z = x / ( x_c − x ) or any other change of the variable mapping the interval [ 0 , x_c ) onto [ 0 , ∞ ). Numerous examples of applying the diff-log transform, accompanied by the use of self-similar approximants, are presented in Refs. [61,83,99], where it is also shown that this method essentially outperforms the diff-log Padé variant.

28. Non-Power-Law Behavior

In the previous sections, a kind of power-law behavior of the considered functions at large variables was kept in mind. Now, it is useful to make some comments on the use of the described approximation methods for other types of behavior. The most often met types of behavior that can occur at large variables are the exponential and logarithmic ones. Below, it is shown that the developed methods of self-similar approximants can be straightforwardly applied to any type of behavior.

28.1. Exponential Behavior

The exponential behavior with respect to time happens in many mathematical models employed for describing the growth of population, mass of biosystems, economic expansion, financial markets, various relaxation phenomena, etc. [100,101,102,103,104,105].
When a sought function at a large variable displays exponential behavior, there are several ways of treating this case. First of all, this kind of behavior can be treated by the self-similar exponential approximants of Section 22. The other way is to resort to the diff-log approximants of Section 26 or, simply, to consider the logarithmic transform
L(x) ≡ ln f(x) .
If the sought function at large variable behaves as
f(x) ≃ B exp ( γ x ) ( x → ∞ ) ,
then
L(x) ≃ γ x , D(x) ≃ γ ( x → ∞ ) .
Therefore, the function behaves as
f(x) ≃ B exp { L(x) } ( x → ∞ ) .
Keeping in mind the asymptotic series (306), one has:
L k ( x ) = ln f k ( x ) ,
which can be expanded in powers of x giving
L_k(x) ≃ Σ_{n=0}^{k} c_n x^n ( x → 0 ) .
This is to be converted into a self-similar approximant L k * ( x ) , after which one obtains the answer:
f k * ( x ) = exp { L k * ( x ) } .
Moreover, the small-variable expansion of an exponential function can be directly and exactly represented through self-similar factor approximants [91]. Indeed, let us consider the exponential function
f(x) = e^x .
Assume that one knows solely the small-variable asymptotic expansion
f_k(x) = Σ_{n=0}^{k} x^n / n! ( x → 0 ) ,
which is used for constructing factor approximants. In the lowest, second, order, one has:
f_2*(x) = lim_{A→0} ( 1 + A x )^{1/A} = e^x .
In the third order, one finds:
f_3*(x) = lim_{A→0} ( 1 + x )^{A/(1−A)} ( 1 + A x )^{1/[A(1−A)]} = e^x ,
and, similarly, in all other orders. Thus, the self-similar factor approximants of all orders reproduce the exponential function exactly:
f_k*(x) = e^x ( k ≥ 2 ) .
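The limit involved in the second-order approximant can be observed numerically, as the following small sketch shows.

```python
# A quick numerical illustration (a sketch) of the limit
# (1 + A x)^(1/A) -> e^x as A -> 0.
import math

x = 1.7
for A in (1e-2, 1e-4, 1e-6):
    print((1.0 + A * x)**(1.0 / A))
print(math.exp(x))   # 5.4739...
```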
Some other more complicated functions, containing exponentials, can also be well approximated by factor approximants [106].

28.2. Logarithmic Behavior

When there is suspicion that the sought function exhibits logarithmic behavior at large variables, it is reasonable to act by analogy with the previous subsection, but now defining the exponential transform
E(x) ≡ exp { f(x) } .
For the asymptotic series (306), one has:
E_k(x) ≡ exp { f_k(x) } ,
whose expansion in powers of x produces
E_k(x) = Σ_{n=0}^{k} b_n x^n ( x → 0 ) .
This can be converted into a self-similar approximation E k * ( x ) , so that the final answer becomes:
f k * ( x ) = ln E k * ( x ) .
As an example, let us consider the function
f(x) = 1 + ln [ ( 1 + √(1 + x) ) / 2 ] ,
with the logarithmic behavior at large variables,
f(x) ≃ 0.5 ln x ( x → ∞ ) .
This function has the expansion
f_k(x) = Σ_{n=0}^{k} a_n x^n ( x → 0 ) ,
with the coefficients
a_0 = 1 , a_1 = 1/4 , a_2 = − 3/32 , a_3 = 5/96 , a_4 = − 35/1024 , a_5 = 63/2560 , a_6 = − 77/4096 , … .
Its exponential transform leads to the series (330), with the coefficients
b_0 = e , b_1 = e/4 , b_2 = − e/16 , b_3 = e/32 , b_4 = − 5e/256 , b_5 = 7e/512 , b_6 = − 21e/2048 , … .
Defining factor approximants E k * ( x ) , one obtains the approximants (331), whose large-variable behavior is of correct logarithmic form:
f_k*(x) ≃ B_k ln x ( x → ∞ ) ,
with the amplitudes B k ,
B 2 = 0.333 , B 4 = 0.4 , B 6 = 0.429 ,
B 8 = 0.444 , B 10 = 0.456 , B 12 = 0.462 , ,
converging to the exact value 0.5 .

29. Critical Temperature Shift

Here, it is shown how the described methods can be used for calculating the critical temperature relative shift caused by interactions in an N-component scalar field theory in three dimensions. The interactions can be characterized by the gas parameter
γ ≡ ρ^{1/3} a_s ,
in which ρ is the particle density and a_s is the s-wave scattering length. This shift is defined as
ΔT_c / T_0 ≡ ( T_c − T_0 ) / T_0 ,
where T 0 is the critical temperature in the free field with γ = 0 , while T c is the critical temperature for nonzero γ . For example, the critical temperature of the 2-component free field
T_0 = ( 2π / m ) [ ρ / ζ(3/2) ]^{2/3} ,
is the point of the Bose–Einstein condensation of ideal gas. Here, m is the mass of a boson, and the Boltzmann and Planck constants are set to one. For weak interactions, the temperature shift has been shown [107,108] to have the form
ΔT_c / T_0 ≃ c_1 γ ( γ → 0 ) ,
where the coefficient c 1 needs to be calculated.
This coefficient can be found in the loop expansion [109,110,111] producing asymptotic series in powers of the variable
x = ( N + 2 ) λ_eff / √μ_eff ,
where N is the number of components, λ e f f , effective coupling, and μ e f f , effective chemical potential. The series in seven loops reads as
c_1(x) ≃ Σ_{n=1}^{5} a_n x^n ( x → 0 ) ,
whose coefficients for several N are listed in Table 1.
However, at the critical point, the effective chemical potential tends to zero, hence the variable x tends to infinity. Thus, one comes to the necessity of finding the limit of the series (337) as x → ∞. The direct substitution of x → ∞ into this series, of course, makes no sense. The self-similar factor approximants of Section 23 are used here, defining the approximants f_k*(x) for c_1(x), keeping in mind that c_1 is finite, so that β_k = 0. Then, the approximants for the sought limit are
f_k*(∞) = a_1 ∏_{i=1}^{N_k} A_i^{n_i} = c_1 .
The convergence is accelerated by quadratic splines, as is explained in Section 25 and in Refs. [97,112]. The results are displayed in Table 2, where they are compared with Monte Carlo simulations [113,114,115,116]. The agreement of the latter with the values calculated by means of the self-similar approximants is very good.

30. Critical Exponents

Calculation of critical exponents is one of the most important problems in the theory of phase transitions. Here, it is shown how the critical exponents can be calculated by using self-similar factor approximants applied to the asymptotic series in powers of ε = 4 − d, where d is the space dimensionality. The O(N) φ⁴ field theory in d = 3 is considered. The definition of the critical exponents can be found in the reviews [27,117].
One usually derives the so-called epsilon expansions for the exponents η, ν⁻¹, and ω. The other exponents can be obtained from the scaling relations:
α = 2 − ν d , β = ( ν / 2 ) ( d − 2 + η ) , γ = ν ( 2 − η ) , δ = ( d + 2 − η ) / ( d − 2 + η ) .
In three dimensions, one has
α = 2 − 3 ν , β = ( ν / 2 ) ( 1 + η ) , γ = ν ( 2 − η ) , δ = ( 5 − η ) / ( 1 + η ) ( d = 3 ) .
The number of components N corresponds to different physical systems. Thus, N = 0 corresponds to dilute polymer solutions, N = 1 , to the Ising universality class, N = 2 , to superfluids and the so-called X Y magnetic models, N = 3 , to the Heisenberg universality class, and N = 4 , to some models of quantum field theory. Formally, it is admissible to study arbitrary N.
In the case of N = −2, the critical exponents are known exactly; in three dimensions,
α = 1/2 , β = 1/4 , γ = 1 ,
δ = 5 , η = 0 , ν = 1/2 ( N = −2 ) .
For the limit N → ∞, the exact exponents are also available:
α = ( d − 4 ) / ( d − 2 ) , β = 1/2 , γ = 2 / ( d − 2 ) , δ = ( d + 2 ) / ( d − 2 ) ,
η = 0 , ν = 1 / ( d − 2 ) , ω = 4 − d ( N → ∞ ) .
The latter for d = 3 reduce to
α = − 1 , β = 1/2 , γ = 2 , δ = 5 ,
η = 0 , ν = 1 , ω = 1 ( d = 3 , N → ∞ ) .
The epsilon expansion results in the series
f_k(ε) = Σ_{n=0}^{k} c_n ε^n ( ε → 0 ) ,
obtained for ε → 0, while, in the end, one has to set ε = 1. Direct substitution of ε = 1 into the series (348) leads to values having little to do with the real exponents. These series require the definition of their effective sums, which is accomplished by means of the self-similar factor approximants
f_k*(ε) = f_0(ε) ∏_{i=1}^{N_k} ( 1 + A_i ε )^{n_i} .
Then, one sets ε = 1 and defines the final answer as the half-sum of the last two factor approximants f_k*(1) and f*_{k−1}(1).
Let us first illustrate the procedure for the O ( 1 ) field theory of the Ising universality class, where there exist the most accurate numerical calculations of the exponents, obtained by Monte Carlo simulations [117,118,119,120,121]. The epsilon expansions for η , ν 1 , and ω can be written [122] as
η ≃ 0.0185185 ε² + 0.01869 ε³ − 0.00832877 ε⁴ + 0.0256565 ε⁵ ,
ν⁻¹ ≃ 2 − 0.333333 ε − 0.117284 ε² + 0.124527 ε³ − 0.30685 ε⁴ − 0.95124 ε⁵ ,
ω ≃ ε − 0.62963 ε² + 1.61822 ε³ − 5.23514 ε⁴ + 20.7498 ε⁵ .
If ε = 1 is set here, one gets the senseless values η = 0.0545, ν = 2.4049, and ω = 17.5033. However, by means of the self-similar factor approximants, one obtains the results shown in Table 3, which are in good agreement with Monte Carlo simulations [117,118,119,120,121].
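These values are obtained by straightforward summation at ε = 1, as the following check (a trivial sketch) confirms.

```python
# A one-line check (a sketch) of the quoted values at epsilon = 1.
eta   = 0.0185185 + 0.01869 - 0.00832877 + 0.0256565
nu    = 1.0 / (2 - 0.333333 - 0.117284 + 0.124527 - 0.30685 - 0.95124)
omega = 1 - 0.62963 + 1.61822 - 5.23514 + 20.7498
print(eta, nu, omega)   # 0.0545, 2.4049, 17.5033
```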
The use of the self-similar factor approximants can be extended to the calculation of the critical exponents for the arbitrary number of components N of the O ( N ) symmetric φ 4 field theory in d = 3 . In the general case, the epsilon expansions [122] read as
η ≃ [ ( N + 2 ) ε² / ( 2 ( N + 8 )² ) ] { 1 + [ ε / ( 4 ( N + 8 )² ) ] ( − N² + 56 N + 272 ) − [ ε² / ( 16 ( N + 8 )⁴ ) ] ( 5 N⁴ + 230 N³ − 1124 N² − 17920 N − 46144 + 384 ζ(3) ( N + 8 ) ( 5 N + 22 ) ) − [ ε³ / ( 64 ( N + 8 )⁶ ) ] ( 13 N⁶ + 946 N⁵ + 27620 N⁴ + 121472 N³ − 262528 N² − 2912768 N − 5655552 − 16 ζ(3) ( N + 8 ) ( N⁵ + 10 N⁴ + 1220 N³ − 1136 N² − 68672 N − 171264 ) + 1152 ζ(4) ( N + 8 )³ ( 5 N + 22 ) − 5120 ζ(5) ( N + 8 )² ( 2 N² + 55 N + 186 ) ) } ,
ν⁻¹ ≃ 2 + [ ( N + 2 ) ε / ( N + 8 ) ] { − 1 − [ ε / ( 2 ( N + 8 )² ) ] ( 13 N + 44 ) + [ ε² / ( 8 ( N + 8 )⁴ ) ] ( 3 N³ − 452 N² − 2672 N − 5312 + 96 ζ(3) ( N + 8 ) ( 5 N + 22 ) ) + [ ε³ / ( 32 ( N + 8 )⁶ ) ] ( 3 N⁵ + 398 N⁴ − 12900 N³ − 81552 N² − 219968 N − 357120 + 16 ζ(3) ( N + 8 ) ( 3 N⁴ − 194 N³ + 148 N² + 9472 N + 19488 ) + 288 ζ(4) ( N + 8 )³ ( 5 N + 22 ) − 1280 ζ(5) ( N + 8 )² ( 2 N² + 55 N + 186 ) ) + O(ε⁴) } ,
ω ≃ ε − [ 3 ε² / ( N + 8 )² ] ( 3 N + 14 ) + [ ε³ / ( 4 ( N + 8 )⁴ ) ] ( 33 N³ + 538 N² + 4288 N + 9568 + 96 ζ(3) ( N + 8 ) ( 5 N + 22 ) ) + [ ε⁴ / ( 16 ( N + 8 )⁶ ) ] ( 5 N⁵ − 1488 N⁴ − 46616 N³ − 419528 N² − 1750080 N − 2599552 − 96 ζ(3) ( N + 8 ) ( 63 N³ + 548 N² + 1916 N + 3872 ) + 288 ζ(4) ( N + 8 )³ ( 5 N + 22 ) − 1920 ζ(5) ( N + 8 )² ( 2 N² + 55 N + 186 ) ) + O(ε⁵) .
The remaining five-loop contributions to ν⁻¹ and ω (the terms of order ε⁵, involving ζ(3) through ζ(7)) are too lengthy to be reproduced here; their explicit form can be found in Ref. [122].
Summing these series by means of the self-similar factor approximants [123,124], one obtains the exponents presented in Table 4. The values found for the exponents are in good agreement with experimental data, as well as with the results of numerical methods, such as Padé–Borel summation and Monte Carlo simulations. It is important to stress that, when the exact values of the exponents are known (for N = −2 and N → ∞), the self-similar approximants automatically reproduce these exact data.

31. Conclusions

In this review, the basic ideas of the approach that allows one to obtain sensible results from the divergent asymptotic series typical of perturbation theory are presented. The pivotal points of the approach can be emphasized as follows:
(i)
The implantation of control functions in the calculational procedure, treating perturbation theory as optimal control theory. Control functions are defined by optimization conditions in order to control the convergence of the sequence of optimized approximants. The optimization conditions are derived from the Cauchy criterion of sequence convergence. The resulting optimized perturbation theory provides good accuracy even for very short series of just a few terms and makes it possible to extrapolate the validity of perturbation theory to arbitrary values of variables, including the limit to infinity.
(ii)
Reformulation of perturbation theory in the language of dynamical theory, treating the transfer from one approximation term to another as motion in discrete time, the role of which is played by the approximation order. Then, the approximation sequence is bijective to the trajectory of an effective dynamical system, and the sequence limit is equivalent to the trajectory fixed point. The motion near the fixed point enjoys the property of functional self-similarity. The approximation dynamical system in discrete time is called a cascade. The approximation cascade can be embedded into a dynamical system in continuous time, termed an approximation flow. The representation in the language of dynamical theory allows one to improve the accuracy of optimized perturbation theory, to study the stability of the procedure, and to select the best initial approximation.
(iii)
Introduction of control functions by means of a fractal transformation of asymptotic series, which results in the derivation of several types of self-similar approximants. These approximants combine the simplicity of their use with good accuracy. They can be employed for the problem of interpolation as well as extrapolation.
The application of the described methods is illustrated by several examples demonstrating the efficiency of the approach.

Author Contributions

V.I.Y. and E.P.Y. equally contributed to this review. All authors have read and agreed to the published version of the manuscript.

Funding

This research received no external funding.

Data Availability Statement

Not applicable.

Conflicts of Interest

The authors declare no conflict of interest.

References

  1. Poincaré, H. New Methods of Celestial Mechanics; American Institute of Physics: New York, NY, USA, 1993. [Google Scholar]
  2. Dingle, R.B. Asymptotic Expansions; Academic: London, UK, 1973. [Google Scholar]
  3. Nayfeh, A.H. Problems in Perturbation; Wiley: New York, NY, USA, 1985. [Google Scholar]
  4. O’Malley, R.E. Singular Perturbation Methods for Ordinary Differential Equations; Springer: New York, NY, USA, 1991. [Google Scholar]
  5. Krylov, N.; Bogolubov, N. Introduction to Non-Linear Mechanics; Princeton University: Princeton, NJ, USA, 1955. [Google Scholar]
  6. Bogolubov, N.N.; Mitropolsky, Y.A. Asymptotic Methods in the Theory of Nonlinear Oscillations; Gordon and Breach: New York, NY, USA, 1961. [Google Scholar]
  7. Grebennikov, E.A.; Ryabov, Y.A. Constructive Methods in the Analysis of Nonlinear Systems; Mir: Moscow, Russia, 1983. [Google Scholar]
  8. Sanders, J.; Verhulst, F. Averaging Methods in Nonlinear Dynamical Systems; Springer: New York, NY, USA, 1985. [Google Scholar]
  9. Hardy, G.H. Divergent Series; Clarendon: Oxford, UK, 1973. [Google Scholar]
  10. Van Dyke, M. Perturbation Methods in Fluid Mechanics; Academic: New York, NY, USA, 1964. [Google Scholar]
  11. Baker, G.A.; Graves-Moris, P. Padé Approximants; Cambridge University: Cambridge, UK, 1996. [Google Scholar]
  12. Honda, M. On perturbation theory improved by strong coupling expansion. J. High Energy Phys. 2014, 12, 19. [Google Scholar] [CrossRef] [Green Version]
  13. Baker, G.A.; Graves-Moris, P. The convergence of sequences of Padé approximants. J. Math. Anal. Appl. 1982, 87, 382–394. [Google Scholar] [CrossRef] [Green Version]
  14. Bender, C.; Mead, L.R.; Papanicolaou, N. Maximum entropy summation of divergent perturbation series. J. Math. Phys. 1987, 28, 1016–1018. [Google Scholar] [CrossRef]
  15. Simon, B. Fifty years of eigenvalue perturbation theory. Bull. Am. Math. Soc. 1991, 24, 303–319. [Google Scholar] [CrossRef]
  16. Bray, A.J.; McCarthy, T.; Moore, M.A.; Reger, J.D.; Young, A.P. Summability of perturbation expansions in disordered systems: Results for a toy model. Phys. Rev. B 1987, 36, 2212–2219. [Google Scholar] [CrossRef] [PubMed]
  17. Lewis, F.L. Optimal Control; Wiley: New York, NY, USA, 1986. [Google Scholar]
  18. Yukalov, V.I. Theory of perturbations with a strong interaction. Mosc. Univ. Phys. Bull. 1976, 31, 10–15. [Google Scholar]
  19. Yukalov, V.I. Model of a hybrid crystal. Theor. Math. Phys. 1976, 28, 652–660. [Google Scholar] [CrossRef]
  20. Yukalov, V.I. Quantum crystal with jumps of particles. Phys. A 1977, 89, 363–372. [Google Scholar] [CrossRef]
  21. Yukalov, V.I. Quantum theory of localized crystal. Ann. Phys. (Berlin) 1979, 491, 31–39. [Google Scholar] [CrossRef]
  22. Yukalov, V.I. Superharmonic approximation for crystal. Ann. Phys. (Berlin) 1980, 492, 171–182. [Google Scholar] [CrossRef]
  23. Yukalov, V.I. Construction of propagators for quantum crystals. Ann. Phys. (Berlin) 1981, 493, 419–433. [Google Scholar] [CrossRef]
  24. Yukalov, V.I.; Zubov, V.I. Localized-particles approach for classical and quantum crystals. Fortschr. Phys. 1983, 31, 627–672. [Google Scholar] [CrossRef]
  25. Yukalov, V.I. Theory of melting and crystallization. Phys. Rev. B 1985, 32, 436–446. [Google Scholar] [CrossRef] [PubMed]
  26. Yukalov, V.I.; Yukalova, E.P. Self-similar structures and fractal transforms in approximation theory. Chaos Solit. Fract. 2002, 14, 839–861. [Google Scholar] [CrossRef] [Green Version]
  27. Yukalov, V.I. Interplay between approximation theory and renormalization group. Phys. Part. Nucl. 2019, 50, 141–209. [Google Scholar] [CrossRef] [Green Version]
  28. Dineykhan, M.; Efimov, G.V.; Gandbold, G.; Nedelko, S.N. Oscillator Representation in Quantum Physics; Springer: Berlin, Germany, 1995. [Google Scholar]
  29. Sissakian, A.N.; Solovtsov, I.L. Variational expansions in quantum chromodynamics. Phys. Part. Nucl. 1999, 30, 1057–1119. [Google Scholar] [CrossRef]
  30. Feranchuk, I.; Ivanov, A.; Le, V.H.; Ulyanenkov, A. Nonperturbative Description of Quantum Systems; Springer: Cham, Switzerland, 2015. [Google Scholar] [CrossRef]
  31. Kleinert, H. Path Integrals in Quantum Mechanics, Statistics, Polymer Physics, and Financial Markets; World Scientific: Singapore, 2004. [Google Scholar] [CrossRef] [Green Version]
  32. Kleinert, H.; Yukalov, V.I. Self-similar variational perturbation theory for critical exponents. Phys. Rev. E 2005, 71, 026131. [Google Scholar] [CrossRef] [Green Version]
  33. Yukalov, V.I.; Gluzman, S. Critical indices as limits of control functions. Phys. Rev. Lett. 1997, 79, 333–336. [Google Scholar] [CrossRef] [Green Version]
  34. Gluzman, S.; Yukalov, V.I. Algebraic self-similar renormalization in the theory of critical phenomena. Phys. Rev. E 1997, 55, 3983–3999. [Google Scholar] [CrossRef] [Green Version]
  35. Yukalov, V.I.; Gluzman, S. Self-similar bootstrap of divergent series. Phys. Rev. E 1997, 55, 6552–6565. [Google Scholar] [CrossRef] [Green Version]
  36. Kadanoff, L.P.; Byam, G. Quantum Statistical Mechanics; Benjamin: New York, NY, USA, 1962. [Google Scholar]
  37. Yukalov, V.I. Statistical Green’s Functions; Queen’s University: Kingston, ON, Canada, 1998. [Google Scholar]
  38. Yukalov, V.I. Destiny of optical lattices with strong intersite interactions. Laser Phys. 2020, 30, 015501. [Google Scholar] [CrossRef]
  39. Yukalov, V.I. Statistical systems with nonintegrable interaction potentials. Phys. Rev. E 2016, 94, 012106. [Google Scholar] [CrossRef] [PubMed] [Green Version]
  40. Buckley, I.R.C.; Duncan, A.; Jones, H.F. Proof of the convergence of the linear δ expansion: Zero dimensions. Phys. Rev. D 1993, 47, 2554–2559. [Google Scholar] [CrossRef] [PubMed]
  41. Bender, C.M.; Duncan, A.; Jones, H.F. Convergence of the optimized expansion for the connected vacuum amplitude: Zero dimensions. Phys. Rev. D 1994, 49, 4219–4225. [Google Scholar] [CrossRef] [PubMed] [Green Version]
  42. Hioe, F.T.; MacMillen, D.; Montroll, E.W. Quantum theory of anharmonic oscillators: Energy levels of a single and a pair of coupled oscillators with quartic coupling. Phys. Rep. 1978, 43, 305–335. [Google Scholar] [CrossRef]
  43. Duncan, A.; Jones, H.F. Convergence proof for optimized expansion: Anharmonic oscillator. Phys. Rev. D 1993, 47, 2560–2572. [Google Scholar] [CrossRef] [PubMed]
  44. Guida, R.; Konishi, K.; Suzuki, H. Convergence of scaled δ expansion: Anharmonic oscillator. Ann. Phys. 1995, 241, 152–184. [Google Scholar] [CrossRef] [Green Version]
  45. Bogolubov, N.N. Lectures on Quantum Statistics; Gordon and Breach: New York, NY, USA, 1967; Volume 1. [Google Scholar]
  46. Bogolubov, N.N. Lectures on Quantum Statistics; Gordon and Breach: New York, NY, USA, 1970; Volume 2. [Google Scholar]
  47. Lieb, E.H.; Seiringer, R.; Solovej, J.P.; Yngvason, J. The Mathematics of the Bose Gas and Its Condensation; Birkhäuser: Basel, Switzerland, 2005. [Google Scholar] [CrossRef] [Green Version]
  48. Letokhov, V. Laser Control of Atoms and Molecules; Oxford University: New York, NY, USA, 2007. [Google Scholar]
  49. Pethick, C.J.; Smith, H. Bose–Einstein Condensation in Dilute Gas; Cambridge University: Cambridge, UK, 2008. [Google Scholar] [CrossRef]
  50. Yukalov, V.I. Basics of Bose–Einstein condensation. Phys. Part. Nucl. 2011, 42, 460–513. [Google Scholar] [CrossRef] [Green Version]
  51. Bogolubov, N.N. Quantum Statistical Mechanics; World Scientific: Singapore, 2014. [Google Scholar] [CrossRef] [Green Version]
  52. Yukalov, V.I. Theory of cold atoms: Bose–Einstein statistics. Laser Phys. 2016, 26, 062001. [Google Scholar] [CrossRef] [Green Version]
  53. Courteille, P.W.; Bagnato, V.S.; Yukalov, V.I. Bose–Einstein condensation of trapped atomic gases. Laser Phys. 2001, 11, 659–800. [Google Scholar]
  54. Yukalov, V.I.; Yukalova, E.P.; Bagnato, V.S. Spectrum of coherent modes for trapped Bose gas. Laser Phys. 2002, 12, 1325–1331. [Google Scholar]
  55. Yukalov, V.I.; Yukalova, E.P. Degenerate trajectories and Hamiltonian envelopes in the method of self-similar approximations. Can. J. Phys. 1993, 71, 537–546. [Google Scholar] [CrossRef]
  56. Seznec, R.R.; Zinn-Justin, J. Summation of divergent series by order dependent mappings: Application to the anharmonic oscillator and critical exponents in field theory. J. Math. Phys. 1979, 20, 1398–1408. [Google Scholar] [CrossRef]
  57. Guida, R.; Konishi, K.; Suzuki, H. Improved convergence proof of the delta expansion and order dependent mappings. Ann. Phys. 1996, 249, 109–145. [Google Scholar] [CrossRef] [Green Version]
  58. Aoyama, T.; Matsuo, T.; Shibusa, Y. Improved Taylor expansion method in the Ising model. Prog. Theor. Phys. 2006, 115, 473–486. [Google Scholar] [CrossRef] [Green Version]
  59. Gluzman, S.; Yukalov, V.I. Self-similarly corrected Padé approximants for the indeterminate problem. Eur. Phys. J. Plus 2016, 131, 340. [Google Scholar] [CrossRef] [Green Version]
  60. Gluzman, S.; Yukalov, V.I. Self-similarly corrected Padé approximants for nonlinear equations. Int. J. Mod. Phys. B 2019, 33, 1950353. [Google Scholar] [CrossRef]
  61. Gluzman, S. Padé and post-Padé approximations for critical phenomena. Symmetry 2020, 12, 1600. [Google Scholar] [CrossRef]
  62. Wellenhofer, C.; Phillips, D.R.; Schwenk, A. From weak to strong: Constrained extrapolation of perturbation series with applications to dilute Fermi systems. Phys. Rev. Res. 2020, 2, 043372. [Google Scholar] [CrossRef]
  63. Yukalov, V.I. Statistical mechanics of strongly nonideal systems. Phys. Rev. A 1990, 42, 3324–3334. [Google Scholar] [CrossRef] [PubMed]
  64. Yukalov, V.I. Self-similar approximations for strongly interacting systems. Phys. A 1990, 167, 833–860. [Google Scholar] [CrossRef]
  65. Yukalov, V.I. Method of self-similar approximations. J. Math. Phys. 1991, 32, 1235–1239. [Google Scholar] [CrossRef]
  66. Yukalov, V.I. Stability conditions for method of self-similar approximations. J. Math. Phys. 1992, 33, 3994–4001. [Google Scholar] [CrossRef]
  67. Yukalov, V.I.; Yukalova, E.P. Temporal dynamics in perturbation theory. Phys. A 1996, 225, 336–362. [Google Scholar] [CrossRef] [Green Version]
  68. Pietsch, A. Approximation spaces. J. Approx. Theory 1981, 32, 115–134. [Google Scholar] [CrossRef]
  69. Walker, J.A. Dynamical Systems and Evolution Equations; Plenum: New York, NY, USA, 1980. [Google Scholar]
  70. Hale, J.K. Asymptotic Behavior of Dissipative Systems; American Mathematical Society: Providence, RI, USA, 1988. [Google Scholar]
  71. Yukalov, V.I.; Yukalova, E.P. Self-similar perturbation theory. Ann. Phys. (N.Y.) 1999, 277, 219–254. [Google Scholar] [CrossRef] [Green Version]
  72. Ott, E. Strange attractors and chaotic motions of dynamical systems. Rev. Mod. Phys. 1981, 53, 655–672. [Google Scholar] [CrossRef]
  73. Schuster, H.G. Deterministic Chaos; VCH: Weinheim, Germany, 1989. [Google Scholar]
  74. Yukalov, V.I.; Yukalova, E.P. Self-similar approximations for thermodynamic potentials. Phys. A 1993, 198, 573–592. [Google Scholar] [CrossRef]
  75. Yukalov, V.I.; Yukalova, E.P. Higher orders of self-similar approximations for thermodynamic potentials. Phys. A 1994, 206, 553–580. [Google Scholar] [CrossRef]
  76. Paladin, G.; Vulpiani, A. Anomalous scaling laws in multifractal objects. Phys. Rep. 1987, 156, 147–225. [Google Scholar] [CrossRef]
  77. Kröger, H. Fractal geometry in quantum mechanics, field theory and spin systems. Phys. Rep. 2000, 323, 81–181. [Google Scholar] [CrossRef]
  78. Barnsley, M.F. Superfractals; Cambridge University: Cambridge, UK, 2006. [Google Scholar]
  79. Barnsley, M.F. Fractal Transform; AK Peters: Natick, MA, USA, 1994. [Google Scholar]
  80. Yukalov, V.I.; Yukalova, E.P.; Gluzman, S. Self-similar interpolation in quantum mechanics. Phys. Rev. A 1998, 58, 96–115. [Google Scholar] [CrossRef] [Green Version]
  81. Gluzman, S.; Yukalov, V.I. Unified approach to crossover phenomena. Phys. Rev. E 1999, 58, 4197–4209. [Google Scholar] [CrossRef] [Green Version]
  82. Yukalov, V.I.; Gluzman, S. Self-similar crossover in statistical physics. Phys. A 1999, 273, 401–415. [Google Scholar] [CrossRef] [Green Version]
  83. Yukalov, V.I.; Yukalova, E.P.; Gluzman, S. Extrapolation and interpolation of asymptotic series by self-similar approximants. J. Math. Chem. 2010, 47, 959–983. [Google Scholar] [CrossRef] [Green Version]
  84. Yukalov, V.I.; Gluzman, S. Self-similar interpolation in high-energy physics. Phys. Rev. D 2015, 91, 125023. [Google Scholar] [CrossRef] [Green Version]
  85. Gluzman, S.; Yukalov, V.I. Self-similar continued root approximants. Phys. Lett. A 2012, 377, 124–128. [Google Scholar] [CrossRef] [Green Version]
  86. Yukalov, V.I.; Gluzman, S. Self-similar exponential approximants. Phys. Rev. E 1998, 58, 1359–1382. [Google Scholar] [CrossRef] [Green Version]
  87. Lang, S. Algebra; Addison-Wesley: Reading, MA, USA, 1984. [Google Scholar]
  88. Yukalov, V.I.; Gluzman, S.; Sornette, D. Summation of power series by self-similar factor approximants. Phys. A 2003, 328, 409–438. [Google Scholar] [CrossRef] [Green Version]
  89. Gluzman, S.; Yukalov, V.I.; Sornette, D. Self-similar factor approximants. Phys. Rev. E 2003, 67, 026109. [Google Scholar] [CrossRef] [PubMed] [Green Version]
  90. Yukalov, V.I.; Gluzman, S. Extrapolation of power series by self-similar factor and root approximants. Int. J. Mod. Phys. B 2004, 18, 3027–3046. [Google Scholar] [CrossRef] [Green Version]
  91. Yukalov, V.I.; Yukalova, E.P. Method of self-similar factor approximants. Phys. Lett. A 2007, 368, 341–347. [Google Scholar] [CrossRef] [Green Version]
92. Yukalov, V.I.; Yukalova, E.P. Self-similar extrapolation of nonlinear problems from small-variable to large-variable limit. Int. J. Mod. Phys. B 2020, 34, 2050208.
93. Gluzman, S.; Yukalov, V.I. Self-similar extrapolation from weak to strong coupling. J. Math. Chem. 2010, 48, 883–913.
94. Weinberg, S. The Quantum Theory of Fields; Cambridge University Press: Cambridge, UK, 2005.
95. Abhignan, V.; Sankaranarayanan, R. Continued functions and perturbation series: Simple tools for convergence of diverging series in O(n)-symmetric φ4 field theory at weak coupling limit. J. Stat. Phys. 2021, 183, 4.
96. Yukalov, V.I. Self-similar approach to market analysis. Eur. Phys. J. B 2001, 20, 609–617.
97. Yukalov, V.I.; Yukalova, E.P. Bose–Einstein condensation temperature of weakly interacting atoms. Laser Phys. Lett. 2017, 14, 073001.
98. He, H.X.; Hamer, C.J.; Oitmaa, J. High-temperature series expansions for the (2 + 1)-dimensional Ising model. J. Phys. A 1990, 23, 1775–1788.
99. Gluzman, S.; Yukalov, V.I. Critical indices from self-similar root approximants. Eur. Phys. J. Plus 2017, 132, 535.
100. Zeide, B. Analysis of growth equations. For. Sci. 1993, 39, 594–616.
101. Day, T.; Taylor, P.D. Von Bertalanffy’s growth equation should not be used to model age and size at maturity. Am. Nat. 1997, 149, 381–393.
102. Yukalov, V.I.; Yukalova, E.P.; Sornette, D. Extreme events in population dynamics with functional carrying capacity. Eur. Phys. J. Spec. Top. 2012, 205, 313–354.
103. Yukalov, V.I.; Yukalova, E.P.; Sornette, D. Population dynamics with nonlinear delayed carrying capacity. Int. J. Bifur. Chaos 2014, 24, 1450021.
104. Yukalov, V.I.; Yukalova, E.P.; Sornette, D. Dynamical system theory of periodically collapsing bubbles. Eur. Phys. J. B 2015, 88, 179.
105. Gluzman, S. Nonlinear approximations to critical and relaxation processes. Axioms 2020, 9, 126.
106. Yukalov, V.I.; Yukalova, E.P. Self-similar extrapolation in quantum field theory. Phys. Rev. D 2021, 103, 076019.
107. Baym, G.; Blaizot, J.P.; Holzmann, M.; Laloë, F.; Vautherin, D. The transition temperature of the dilute interacting Bose gas. Phys. Rev. Lett. 1999, 83, 1703–1706.
108. Baym, G.; Blaizot, J.P.; Zinn-Justin, J. The transition temperature of the dilute interacting Bose gas for N internal states. Europhys. Lett. 2000, 49, 150–155.
109. Kastening, B. Shift of BEC temperature of homogeneous weakly interacting Bose gas. Laser Phys. 2004, 14, 586–590.
110. Kastening, B. Bose–Einstein condensation temperature of a homogeneous weakly interacting Bose gas in variational perturbation theory through seven loops. Phys. Rev. A 2004, 69, 043613.
111. Kastening, B. Nonuniversal critical quantities from variational perturbation theory and their application to the Bose–Einstein condensation temperature shift. Phys. Rev. A 2004, 70, 043621.
112. Yukalov, V.I.; Yukalova, E.P. Critical temperature in weakly interacting multicomponent field theory. Eur. Phys. J. Web Conf. 2017, 138, 03011.
113. Kashurnikov, V.A.; Prokof’ev, N.; Svistunov, B. Critical temperature shift in weakly interacting Bose gas. Phys. Rev. Lett. 2001, 87, 120402.
114. Arnold, P.; Moore, G. BEC transition temperature of a dilute homogeneous imperfect Bose gas. Phys. Rev. Lett. 2001, 87, 120401.
115. Arnold, P.; Moore, G. Monte Carlo simulation of O(2) φ4 field theory in three dimensions. Phys. Rev. E 2001, 64, 066113.
116. Sun, X. Monte Carlo studies of three-dimensional O(1) and O(4) φ4 theory related to BEC phase transition temperatures. Phys. Rev. E 2003, 67, 066702.
117. Pelissetto, A.; Vicari, E. Critical phenomena and renormalization-group theory. Phys. Rep. 2002, 368, 549–727.
118. Deng, Y.; Blöte, H.W. Simultaneous analysis of several models in the three-dimensional Ising universality class. Phys. Rev. E 2003, 68, 036125.
119. Campostrini, M.; Hasenbusch, M.; Pelissetto, A.; Vicari, E. Theoretical estimates of the critical exponents of the superfluid transition in 4He by lattice methods. Phys. Rev. B 2006, 74, 144506.
120. Hasenbusch, M. Finite size scaling study of lattice models in the three-dimensional Ising universality class. Phys. Rev. B 2010, 82, 174433.
121. Ferrenberg, A.; Xu, J.; Landau, D.P. Pushing the limits of Monte Carlo simulations for the three-dimensional Ising model. Phys. Rev. E 2018, 97, 043301.
122. Kleinert, H.; Schulte-Frohlinde, V. Critical Properties of φ4-Theories; World Scientific: Singapore, 2001.
123. Yukalov, V.I.; Yukalova, E.P. Calculation of critical exponents by self-similar factor approximants. Eur. Phys. J. B 2007, 55, 93–99.
124. Yukalov, V.I.; Yukalova, E.P. Describing phase transitions in field theory by self-similar approximants. Eur. Phys. J. Web Conf. 2019, 204, 02003.
Table 1. Coefficients a_n of the loop expansion of c_1(x) for the number of components N.

| N   | 0          | 1           | 2            | 3            | 4            |
|-----|------------|-------------|--------------|--------------|--------------|
| a_1 | 0.111643   | 0.111643    | 0.111643     | 0.111643     | 0.111643     |
| a_2 | −0.0264412 | −0.0198309  | −0.0165258   | −0.0145427   | −0.0132206   |
| a_3 | 0.0086215  | 0.00480687  | 0.00330574   | 0.00253504   | 0.0020754    |
| a_4 | −0.0034786 | −0.00143209 | −0.000807353 | −0.000536123 | −0.000392939 |
| a_5 | 0.00164029 | 0.00049561  | 0.000227835  | 0.000130398  | 0.0000852025 |
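Table 1 makes the extrapolation problem concrete: the sought quantity is the large-x limit of c_1(x), while the loop expansion is only asymptotic as x → 0. The following minimal Python sketch (ours, for illustration only; the evaluation points and the variable name x are our choices, not taken from the original paper) evaluates the successive truncations of the series using the N = 2 column and shows how they scatter once x is not small.

```python
# Partial sums of the loop expansion c_1(x) ~ a_1 x + a_2 x^2 + ... + a_5 x^5,
# using the N = 2 coefficients from Table 1.  The sought coefficient c_1 is
# the x -> infinity limit of c_1(x), so the truncated series, valid only for
# x -> 0, cannot be used directly and has to be extrapolated.

a = [0.111643, -0.0165258, 0.00330574, -0.000807353, 0.000227835]  # a_1..a_5, N = 2

def partial_sum(x, k):
    """k-th order truncation: sum of a_n * x**n for n = 1..k."""
    return sum(a[n - 1] * x**n for n in range(1, k + 1))

for x in (1.0, 10.0, 100.0):
    sums = [partial_sum(x, k) for k in range(1, 6)]
    print(f"x = {x:6.1f}:", ["%.4g" % s for s in sums])

# For x = 1 the truncations still settle near 0.098, but for x = 10 and 100
# they alternate with growing magnitude instead of converging -- which is why
# a resummation such as self-similar factor approximants is needed to reach
# the x -> infinity limit.
```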
Table 2. Critical temperature shift obtained using self-similar factor approximants, as compared with Monte Carlo simulations.

| N | c_1                 | Monte Carlo             |
|---|---------------------|-------------------------|
| 0 | 0.77 ± 0.03         |                         |
| 1 | 1.06 ± 0.05         | 1.09 ± 0.09 [116]       |
| 2 | 1.29 ± 0.07         | 1.29 ± 0.05 [113]       |
|   |                     | 1.32 ± 0.02 [114,115]   |
| 3 | 1.46 ± 0.08         |                         |
| 4 | 1.60 ± 0.09         | 1.60 ± 0.10 [116]       |
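For orientation, recall what the tabulated coefficient means; the defining relation below is standard in the dilute-Bose-gas literature (e.g., [107,113,114]) and is quoted here only to fix the notation. The coefficient c_1 determines the leading shift of the condensation temperature T_c relative to the ideal-gas value T_0 at a small gas parameter:

\[
\frac{\Delta T_c}{T_0} \; \equiv \; \frac{T_c - T_0}{T_0} \; \simeq \; c_1 \, a \, \rho^{1/3},
\]

where a is the s-wave scattering length and ρ is the particle density. The physical Bose gas corresponds to N = 2; the other rows give the analogous coefficient for the O(N)-symmetric field theory with N internal states [108,112].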
Table 3. Critical exponents for the O(1)-symmetric φ4 field theory of the Ising universality class calculated using self-similar factor approximants, as compared with Monte Carlo simulations.

| Exponent | Factor Approximants | Monte Carlo |
|----------|---------------------|-------------|
| α        | 0.10645             | 0.11026     |
| β        | 0.32619             | 0.32630     |
| γ        | 1.24117             | 1.23708     |
| δ        | 4.80502             | 4.79091     |
| η        | 0.03359             | 0.03611     |
| ν        | 0.63118             | 0.62991     |
| ω        | 0.78755             | 0.83000     |
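The exponents of Table 3 are not independent: in dimension d they are connected by the standard scaling and hyperscaling relations (the relations themselves are textbook material; the numerical check is ours):

\[
\alpha = 2 - \nu d, \qquad
\beta = \frac{\nu}{2}\,(d - 2 + \eta), \qquad
\gamma = \nu\,(2 - \eta), \qquad
\delta = \frac{d + 2 - \eta}{d - 2 + \eta}.
\]

With d = 3 and the factor-approximant values ν = 0.63118 and η = 0.03359, these give α ≈ 0.10646, β ≈ 0.32619, γ ≈ 1.24116, and δ ≈ 4.80500, reproducing the tabulated entries within rounding and confirming the internal consistency of the approximants.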
Table 4. Critical exponents for the O(N)-symmetric φ4 field theory obtained by the summation of ε expansions using self-similar factor approximants.

| N      | α        | β       | γ      | δ      | η       | ν       | ω       |
|--------|----------|---------|--------|--------|---------|---------|---------|
| −2     | 0.5      | 0.25    | 1      | 5      | 0       | 0.5     | 0.79838 |
| −1     | 0.36612  | 0.27742 | 1.0791 | 4.8897 | 0.01874 | 0.54463 | 0.79380 |
| 0      | 0.23466  | 0.30268 | 1.1600 | 4.8323 | 0.02875 | 0.58845 | 0.79048 |
| 1      | 0.10645  | 0.32619 | 1.2412 | 4.8050 | 0.03359 | 0.63118 | 0.78755 |
| 2      | −0.01650 | 0.34799 | 1.3205 | 4.7947 | 0.03542 | 0.67217 | 0.78763 |
| 3      | −0.13202 | 0.36797 | 1.3961 | 4.7940 | 0.03556 | 0.71068 | 0.78904 |
| 4      | −0.23835 | 0.38603 | 1.4663 | 4.7985 | 0.03476 | 0.74612 | 0.79133 |
| 5      | −0.33436 | 0.40208 | 1.5302 | 4.8057 | 0.03347 | 0.77812 | 0.79419 |
| 6      | −0.41963 | 0.41616 | 1.5873 | 4.8142 | 0.03197 | 0.80654 | 0.79747 |
| 7      | −0.49436 | 0.42836 | 1.6376 | 4.8231 | 0.03038 | 0.83145 | 0.80108 |
| 8      | −0.55920 | 0.43882 | 1.6816 | 4.8320 | 0.02881 | 0.85307 | 0.80503 |
| 9      | −0.61506 | 0.44774 | 1.7196 | 4.8406 | 0.02729 | 0.87169 | 0.80935 |
| 10     | −0.66297 | 0.45530 | 1.7524 | 4.8489 | 0.02584 | 0.88766 | 0.81408 |
| 50     | −0.98353 | 0.50113 | 1.9813 | 4.9537 | 0.00779 | 0.99451 | 0.93176 |
| 100    | −0.93643 | 0.49001 | 1.9564 | 4.9926 | 0.00123 | 0.97881 | 0.97201 |
| 1000   | −0.99528 | 0.49933 | 1.9966 | 4.9986 | 0.00023 | 0.99842 | 0.99807 |
| 10,000 | −0.99952 | 0.49993 | 1.9997 | 4.9999 | 0.00002 | 0.99984 | 0.99979 |
| ∞      | −1       | 0.5     | 2      | 5      | 0       | 1       | 1       |

The last row corresponds to the limit N → ∞, where the O(N) φ4 theory reduces to the exactly solvable spherical model with α = −1, β = 1/2, γ = 2, δ = 5, η = 0, and ν = 1.
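The same internal consistency can be checked mechanically across Table 4. The short Python sketch below is ours (the selected rows are copied from the table) and verifies the hyperscaling relation α = 2 − 3ν row by row.

```python
# Consistency check (ours): the exponents of Table 4 should obey the
# hyperscaling relation alpha = 2 - d*nu with d = 3.  A handful of rows
# (N, alpha, nu) copied from the table:
rows = [
    (-2,  0.5,      0.5),
    ( 0,  0.23466,  0.58845),
    ( 1,  0.10645,  0.63118),
    ( 2, -0.01650,  0.67217),
    ( 4, -0.23835,  0.74612),
    (10, -0.66297,  0.88766),
]

d = 3
for N, alpha, nu in rows:
    predicted = 2 - d * nu
    # Agreement to ~1e-5 confirms the tabulated values are mutually consistent.
    print(f"N = {N:3d}: alpha = {alpha:9.5f}, 2 - 3*nu = {predicted:9.5f}, "
          f"diff = {alpha - predicted:+.5f}")
```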