Article

A New Approach to Fractional Kinetic Evolutions

by Vassili N. Kolokoltsov 1,2,3 and Marianna Troeva 4,*
1 Department of Statistics, University of Warwick, Coventry CV4 7AL, UK
2 Faculty of Applied Mathematics and Control Processes, Saint-Petersburg State University, 198504 Saint Petersburg, Russia
3 Federal Research Center “Computer Science and Control”, Russian Academy of Sciences, 119333 Moscow, Russia
4 Research Institute of Mathematics, North-Eastern Federal University, 58 Belinskogo str., 677000 Yakutsk, Russia
* Author to whom correspondence should be addressed.
Fractal Fract. 2022, 6(2), 49; https://doi.org/10.3390/fractalfract6020049
Submission received: 19 December 2021 / Revised: 7 January 2022 / Accepted: 13 January 2022 / Published: 18 January 2022
(This article belongs to the Special Issue 2021 Feature Papers by Fractal Fract's Editorial Board Members)

Abstract

Kinetic equations describe the limiting deterministic evolution of properly scaled systems of interacting particles. A rather universal extension of the classical evolutions, aimed at taking the effects of memory into account, is obtained by replacing the standard time derivative in these evolutions with a fractional one. In the present paper, extending some previous notes of the authors related to models with a finite state space, we systematically develop the idea of CTRW (continuous time random walk) modelling of the Markovian evolution of interacting particle systems. This leads to a more involved class of fractional kinetic measure-valued evolutions, with mixed fractional-order derivatives varying with the state of the particle system, and with variational derivatives with respect to the measure variable. We rigorously justify the limiting procedure, prove the well-posedness of the new equations, and present a probabilistic formula for their solutions. As the most basic examples we present the fractional versions of the Smoluchovski coagulation and Boltzmann collision models.

1. Introduction

Kinetic equations are measure-valued equations describing the dynamic law of large numbers (LLN) limit of Markovian interacting particle systems as the number of particles tends to infinity. The resulting nonlinear measure-valued evolutions can be interpreted probabilistically as nonlinear Markov processes, see [1]. In the case of a discrete state space, the set of probability measures coincides with the simplex $\Sigma_n$ of sequences of nonnegative numbers $x = (x_1, \ldots, x_n)$ such that $\sum_j x_j = 1$, where n can be any natural number, or even $n = \infty$. The corresponding kinetic equations (in the case of stationary transitions) are ordinary differential equations (ODEs) of the form
$$\dot{x} = x Q(x) \iff \dot{x}_i = \sum_k x_k Q_{ki}(x) \quad \text{for all } i, \tag{1}$$
where Q is a stochastic or Kolmogorov Q-matrix (that is, it has non-negative off-diagonal elements and the elements of each row sum to zero) depending Lipschitz continuously on x. Let $X_t(x)$ denote the solution of Equation (1) with the initial condition x.
It is seen that the evolution (1) is a direct nonlinear extension of the evolution of the probability distributions of a usual Markov chain, which is given by the equation $\dot{x} = xQ$ with a constant Q-matrix Q.
Remark 1.
One can show (see [1]) that any ordinary differential equation (with a Lipschitz r.h.s.) preserving the simplex Σ n has form (1) with some function Q ( x ) .
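To make the structure of Equation (1) concrete, here is a minimal numerical sketch (Python) that integrates a kinetic equation of this type on the simplex $\Sigma_3$ by forward Euler. The particular state-dependent Q-matrix is a hypothetical example chosen only for illustration; it is not taken from the paper.

```python
import numpy as np

def Q(x):
    """A hypothetical state-dependent Q-matrix on 3 states: non-negative
    off-diagonal rates depending on x, rows summing to zero so that the
    simplex is preserved by the flow."""
    q = np.array([[0.0, 1.0 + x[1], 0.5],
                  [0.3, 0.0, 1.0 + x[2]],
                  [1.0 + x[0], 0.2, 0.0]])
    np.fill_diagonal(q, -q.sum(axis=1))
    return q

def solve_kinetic(x0, T=5.0, dt=1e-3):
    """Forward-Euler integration of the kinetic ODE x' = x Q(x) of Eq. (1)."""
    x = np.array(x0, dtype=float)
    for _ in range(int(T / dt)):
        x = x + dt * (x @ Q(x))
    return x

x_T = solve_kinetic([0.6, 0.3, 0.1])
print(x_T, x_T.sum())   # the total mass stays equal to 1 (zero row sums)
```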
Instead of looking at the evolution of the distributions, one can look alternatively at the evolution of functions F ( x , t ) = F ( X t ( x ) ) of these distributions, which clearly satisfy the equation
$$\dot{F}(x,t) = (x, Q(x)\nabla F(x,t)) = \sum_{k,i} x_k Q_{ki}(x)\,\frac{\partial F(x,t)}{\partial x_i}. \tag{2}$$
In the language of the theory of differential equations, one says that the ODEs (1) are the characteristics of the linear first-order partial differential Equation (2).
In the case of a usual Markov chain (with a constant Q) it is seen that the set of linear functions F is preserved by Equation (2). In fact, if $F(x) = (f, x)$ with some vector $f \in \mathbf{R}^n$, then $F(x,t) = F(X_t(x)) = (f_t, x)$ with $f_t$ satisfying the equation $\dot{f} = Qf$, which is dual to the equation $\dot{x} = xQ$. In the nonlinear case, this conservation of linearity does not hold.
More generally (see [1]), for a system of mean-field interacting particles given by a family of operators A μ , which are generators of Markov processes in some Euclidean space R d depending on probability measures μ on R d as parameters and having a common core, the natural scaling limit of such a system, as the number of particles tends to infinity (dynamic LLN), is described by the kinetic equations, which are most conveniently written in the weak form
$$(f, \dot{\mu}_t) = (A_{\mu_t} f, \mu_t), \tag{3}$$
where f is an arbitrary function from the common core of the operators A μ .
Remark 2.
In Equation (3) we stress explicitly in the notation that $\mu_t$ depends on time t (to distinguish it from the time-independent test function f), while for the standard ODE (1) we omitted this dependence for brevity.
The corresponding generalization of Equation (2) is the following differential equation in variational derivatives (see [1]):
$$\frac{\partial F(\mu,t)}{\partial t} = \int_{\mathbf{R}^d} A_\mu \frac{\delta F(\mu,t)}{\delta \mu(.)}(z)\, \mu(dz). \tag{4}$$
The most studied situation is the case where the $A_\mu$ are diffusion operators, in which case the nonlinear evolution given by (3) is referred to as a nonlinear diffusion or the McKean–Vlasov diffusion. Other important particular cases include nonlinear Lévy processes (when the $A_\mu$ generate Lévy processes), nonlinear stable processes (when the $A_\mu$ generate stable or stable-like processes) and various cases of jump-type processes generated by $A_\mu$, which include the famous Boltzmann and Smoluchovski equations.
All these equations are derived as the natural scaling limits of some random walks on the space of the configurations of many-particle systems.
Standard diffusions are known to be obtained by the natural scaling limits of simple random walks. For instance, in this way one can obtain the simplest diffusion processes with variable drift governed by the diffusion equations
$$\frac{\partial u}{\partial t} = \Delta u + (b(x), \nabla u), \quad u = u(t,x), \tag{5}$$
where Δ and ∇ are operators acting on the x variable.
When standard random walks are extended to more general CTRWs (continuous time random walks), characterised by the property that the random times between jumps are not exponential but have tail probabilities decreasing by a power law, their limits turn into non-Markovian processes described by fractional evolutions. The fractional equations were introduced into physics precisely via such limits, see [2]. For instance, instead of Equation (5), the simplest equation that one can obtain via a natural scaling is the equation
$$D^{\alpha}_{a+} u = \Delta u + (b(x), \nabla u), \quad u = u(t,x), \tag{6}$$
where $D^{\alpha}_{a+}$ is the Caputo–Dzherbashyan fractional derivative of some order $\alpha$ (see [3,4]).
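For readers who want to experiment with an equation such as (6), the sketch below evaluates the Caputo–Dzherbashyan derivative numerically with the standard L1 discretization and checks it against the known closed form for the test function $u(t) = t^2$. The scheme, step size and test function are illustrative choices, not part of the paper.

```python
import numpy as np
from math import gamma

def caputo_l1(u, h, alpha):
    """L1 discretization of the Caputo-Dzherbashyan derivative D_{0+}^alpha
    at the last grid point, for samples u[0..n] on a uniform grid of step h."""
    n = len(u) - 1
    j = np.arange(n)
    w = (n - j) ** (1 - alpha) - (n - j - 1) ** (1 - alpha)
    return (w * np.diff(u)).sum() / (gamma(2 - alpha) * h ** alpha)

alpha, t, h = 0.6, 1.0, 1e-3
grid = np.arange(0.0, t + h / 2, h)
numeric = caputo_l1(grid ** 2, h, alpha)
exact = 2 * t ** (2 - alpha) / gamma(3 - alpha)   # Caputo derivative of t^2
print(numeric, exact)                             # should agree to a few digits
```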
Now the question arises: what will be the natural fractional version of the kinetic evolutions (3) and (4)? One way of extension would be to write the Caputo–Dzherbashyan fractional derivative of some fixed order α instead of the usual one in (3), as was done, e.g., in [5]. However, if one follows systematically the idea of scaling from the CTRW approximation, and takes into account the natural possibility of different waiting times for jumps from different states (as in the usual Markovian approximation), one obtains an equation of a more complicated type, with fractional derivatives of position-dependent order. In [6] this derivation was performed for the case of a discrete state space, that is, for a nonlinear Markov chain described by Equation (1), leading to the fractional generalization of Equation (2) of the form
$$D_t^{(\alpha, x)} F(x, s) = (x, Q(x)\nabla F(x,s)), \tag{7}$$
where $\alpha = (\alpha_1, \ldots, \alpha_n)$ is the vector describing the power laws of the waiting times in the various states $\{1, \ldots, n\}$ and $D_t^{(x,\alpha)}$ is the right Caputo–Dzherbashyan fractional derivative of order $(x, \alpha)$, depending on the position x and acting on the time variable t.
The aim of the present paper is to derive the corresponding limiting equation in the general case, which, instead of (4) (and generalising (7)), reads as the equation
$$D_t^{(\alpha,\mu)} F(\mu, s) = \int_{\mathbf{R}^d} A_\mu \frac{\delta F(\mu,s)}{\delta \mu(.)}(z)\, \mu(dz), \tag{8}$$
where $\alpha$ is a function on $\mathbf{R}^d$ and
$$(\alpha, \mu) = \int_{\mathbf{R}^d} \alpha(x)\, \mu(dx).$$
We also establish the well-posedness of this equation and provide a probabilistic formula for its solutions. We perform the derivation only in the case of integral operators $A_\mu$, that is, probabilistically, for underlying Markov processes of pure jump type.
The content of the paper is as follows. In the next section, we recall some basic notations and facts from the theory of measure-valued limits of interacting particle systems. In Section 3 we obtain the dynamic LLN for interacting multi-agent systems for the case of non-exponential waiting times with the power tail distributions. In Section 4 we formulate our main results for the new class of fractional kinetic measure-valued evolutions, with the mixed fractional-order derivatives and with variational derivatives with respect to the measure variable. In Section 5, Section 6 and Section 7, we present proofs of our main results, formulated in the previous section. Namely, we justify rigorously the limiting procedure, prove the well-posedness of the new equations and present a probabilistic formula for their solutions.
In Section 8 we extend the CTRW modeling of interacting particles to the case of binary or even more general k-ary interactions.
In the next two sections, Section 9 and Section 10, we present examples of kinetic equations for binary interactions: the fractional versions of the Smoluchovski coagulation and Boltzmann collision models.
In Appendix A, Appendix B and Appendix C we present auxiliary results we need, namely, the standard functional limit theorem for the random-walk-approximation; some results on time-nonhomogeneous stable-like subordinators; and a standard piece of theory about Dynkin’s martingales in a way tailored to our purposes.
The bold letters E and P will be used to denote expectation and probability.
All general information on fractional calculus that we are using can be found in the books [7,8,9,10].

2. Preliminaries: General Kinetic Equations for Birth-and-Death Processes with Migration

Let us recall some basic notations and facts from the standard measure-valued-limits of interacting processes.
Let X be a locally compact metric space. For simplicity and definiteness, one can take $X = \mathbf{R}^d$ or $X = \mathbf{R}_+$ (the set of positive numbers), but this is not important. Denoting by $X^0$ a one-point space and by $X^j$ the powers $X \times \cdots \times X$ (j times), we denote by $\mathcal{X}$ their disjoint union $\mathcal{X} = \sqcup_{j=0}^{\infty} X^j$. In applications, X specifies the state space of one particle and $\mathcal{X}$ stands for the state space of a random number of similar particles. We denote by $C_{\mathrm{sym}}(\mathcal{X})$ the Banach space of symmetric (invariant under all permutations of arguments) bounded continuous functions on $\mathcal{X}$ and by $C_{\mathrm{sym}}(X^k)$ the corresponding space of functions on the finite power $X^k$. The space of symmetric (positive finite Borel) measures is denoted by $M_{\mathrm{sym}}(\mathcal{X})$. The elements of $M_{\mathrm{sym}}(\mathcal{X})$ and $C_{\mathrm{sym}}(\mathcal{X})$ are interpreted as the (mixed) states and observables of a Markov process on $\mathcal{X}$. We denote the elements of $\mathcal{X}$ by bold letters, say $\mathbf{x}, \mathbf{y}$.
Reducing the set of observables to C sym ( X ) means effectively that our state space is not X (or X k ) but rather the quotient space S X (or S X k resp.) obtained by factorization with respect to all permutations, which allows the identifications C sym ( X ) = C ( S X ) and C sym ( X k ) = C ( S X k ) .
For a function f on X we shall denote by $f^{\oplus}$ the function on $\mathcal{X}$ defined as $f^{\oplus}(\mathbf{x}) = f(x_1) + \cdots + f(x_m)$ for any $\mathbf{x} = (x_1, \ldots, x_m)$.
A key role in the theory of measure-valued limits of interacting particle systems is played by the scaled inclusion S X to M ( X ) given by
$$\mathbf{x} = (x_1, \ldots, x_l) \mapsto h(\delta_{x_1} + \cdots + \delta_{x_l}) = h\delta_{\mathbf{x}}, \tag{9}$$
which defines a bijection between S X and the set h M δ + ( X ) of finite sums of h-scaled Dirac’s δ -measures, where h is a small positive parameter. If a process under consideration preserves the number of particles N, then one usually chooses h = 1 / N .
Remark 3.
Let us stress that we are using here a non-conventional notation: $\delta_{\mathbf{x}} = \delta_{x_1} + \cdots + \delta_{x_l}$, which is convenient for our purposes.
Clearly each $f \in C_{\mathrm{sym}}(\mathcal{X})$ is defined by its components (restrictions) $f^k$ on $X^k$, so that for $\mathbf{x} = (x_1, \ldots, x_k) \in X^k \subset \mathcal{X}$, say, we can write $f(\mathbf{x}) = f(x_1, \ldots, x_k) = f^k(x_1, \ldots, x_k)$. Similar notations for the components of measures from $M(\mathcal{X})$ will be used. In particular, the pairing between $C_{\mathrm{sym}}(\mathcal{X})$ and $M(\mathcal{X})$ can be written as
$$(f, \rho) = \int f(\mathbf{x})\, \rho(d\mathbf{x}) = f^0 \rho^0 + \sum_{n=1}^{\infty} \int f(x_1, \ldots, x_n)\, \rho(dx_1 \cdots dx_n), \quad f \in C_{\mathrm{sym}}(\mathcal{X}),\ \rho \in M(\mathcal{X}).$$
A mean-field dependent jump-type process of particle transformations (with a possible change in the number of particles) or a mean-field dependent birth-and-death process with migration, can be specified by a continuous transition kernel
$$P(\mu, x, d\mathbf{y}) = \{P(\mu, x, dy_1 \cdots dy_m),\ m = 0, \ldots, \bar{m}\} \tag{10}$$
from X to S X depending on a measure μ M ( X ) as a parameter.
Remark 4.
For brevity we write (10) in a unified way including m = 0 . More precisely, the transitions with m = 0 describe the death of particles and are specified by some rates P ( μ , x ) .
We restrict attention to finite m ¯ in order not to bother with some irrelevant technicalities arising otherwise.
To exclude fictitious jumps one usually assumes that P ( μ , x , { x } ) = 0 for all x, which we shall do as well.
By the intensity of the interaction at x we mean the total mass
$$P(\mu, x) = \int_{\mathcal{X}} P(\mu, x, d\mathbf{y}) = \sum_{m=0}^{\bar m} \int_{X^m} P_m(\mu, x, dy_1 \cdots dy_m). \tag{11}$$
Supposing that any particle, randomly chosen from a given set of n particles, can be transformed according to P, leads to the following generator of the process on X
$$(Gf)(x_1, \ldots, x_n) = \sum_i \int \big(f(x_1, \ldots, x_{i-1}, \mathbf{y}, x_{i+1}, \ldots, x_n) - f(x_1, \ldots, x_n)\big)\, P(h\delta_{\mathbf{x}}, x_i, d\mathbf{y})$$
$$= \sum_{m=0}^{\bar m} \sum_i \int \big(f(x_1, \ldots, x_{i-1}, y_1, \ldots, y_m, x_{i+1}, \ldots, x_n) - f(x_1, \ldots, x_n)\big)\, P_m(h\delta_{\mathbf{x}}, x_i, dy_1 \cdots dy_m). \tag{12}$$
By the standard probabilistic interpretation of jump processes (see, e.g., [1,11]), the probabilistic description of the evolution of a pure jump Markov process on X specified by the generator G (if this process is well defined) is as follows. Starting from a state x = ( x 1 , , x n ) , one attaches to each x i a random P ( h δ x , x i ) -exponential waiting time σ i (exponential clock). That is, P ( σ i > t ) = exp { P ( h δ x , x i ) t } for all t > 0 . Then the minimum σ of all these times is again an exponential random time, namely A h ( x ) -exponential waiting time with
$$A_h(\mathbf{x}) = P(h\delta_{\mathbf{x}}, \mathbf{x}) = \sum_i P(h\delta_{\mathbf{x}}, x_i) = \frac{1}{h} \int_X P(h\delta_{\mathbf{x}}, y)\, (h\delta_{\mathbf{x}})(dy).$$
When σ rings, the particle $x_i$ that makes a transition is chosen according to the probability law $P(h\delta_{\mathbf{x}}, x_i)/A_h(\mathbf{x})$, and it then makes an instantaneous transition to $\mathbf{y}$ according to the distribution $P(h\delta_{\mathbf{x}}, x_i, d\mathbf{y})/P(h\delta_{\mathbf{x}}, x_i)$. This procedure then repeats, starting from the new state $(\mathbf{x}\setminus\{x_i\}) \cup \mathbf{y}$.
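The probabilistic recipe just described is easy to turn into a simulation. The sketch below implements it for a purely migrational example (each jump replaces one particle's position); the mean-field intensity and the jump law are hypothetical choices of ours, used only to illustrate the mechanism of exponential clocks and the selection of the jumping particle.

```python
import numpy as np

rng = np.random.default_rng(0)

def intensity(x, mean_x):
    """Hypothetical mean-field intensity P(mu, x_i): particles far from the
    empirical mean of the system jump faster (vectorized over particles)."""
    return 1.0 + np.abs(x - mean_x)

def simulate(n=200, t_final=5.0):
    """Gillespie-type simulation of the pure-migration jump process described
    above: exponential clocks at every particle; the minimal clock is
    exponential with rate A_h(x) = sum_i P(h*delta_x, x_i)."""
    x = rng.normal(size=n)                 # initial particle positions
    t = 0.0
    while True:
        rates = intensity(x, x.mean())
        total = rates.sum()                # rate of the minimal waiting time
        t += rng.exponential(1.0 / total)
        if t > t_final:
            return x
        i = rng.choice(n, p=rates / total)            # particle that jumps
        x[i] = rng.normal(loc=x.mean(), scale=0.5)    # hypothetical jump law P(mu, x_i, dy)

print(simulate().std())   # spread of the final empirical measure
```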
By the transformation (9), we transfer the process generated by G on S X to the process on h M δ + ( X ) with the generator
$$G_h F(h\delta_{\mathbf{x}}) = \sum_i \int_{\mathcal{X}} [F(h\delta_{\mathbf{x}} - h\delta_{x_i} + h\delta_{\mathbf{y}}) - F(h\delta_{\mathbf{x}})]\, P(h\delta_{\mathbf{x}}, x_i, d\mathbf{y})$$
$$= \int_X \int_{\mathcal{X}} \frac{1}{h} [F(h\delta_{\mathbf{x}} - h\delta_z + h\delta_{\mathbf{y}}) - F(h\delta_{\mathbf{x}})]\, P(h\delta_{\mathbf{x}}, z, d\mathbf{y})\, h\delta_{\mathbf{x}}(dz).$$
Then it is seen that, as h 0 (and for smooth F), these generators converge to the operator
$$G_{\mathrm{lim}} F(\mu) = \int_X \int_{\mathcal{X}} \left[ \frac{\delta F(\mu)}{\delta \mu(.)}(\mathbf{y}) - \frac{\delta F(\mu)}{\delta \mu}(x) \right] P(\mu, x, d\mathbf{y})\, \mu(dx). \tag{13}$$
This makes it plausible to conclude (for detail see [1]) that, as h 0 (and under mild technical assumptions), the process generated by (12) converges weakly to a deterministic process on measures generated by (13), so that this process is given by the solution of a kinetic equation of type (3), that is,
$$(f, \dot{\mu}_t) = \int_X \int_{\mathcal{X}} (f(\mathbf{y}) - f(x))\, \mu_t(dx)\, P(\mu_t, x, d\mathbf{y}). \tag{14}$$
Denoting by $M_\mu(t)$ the solution to Equation (14) with the initial condition μ, we can rewrite the evolution (14) equivalently in terms of the functions $F(\mu, t) = F(M_\mu(t))$ as an equation of type (4):
$$\frac{\partial F(\mu,t)}{\partial t} = \int_X \int_{\mathcal{X}} \left[ \frac{\delta F(\mu,t)}{\delta \mu(.)}(\mathbf{y}) - \frac{\delta F(\mu,t)}{\delta \mu}(x) \right] P(\mu, x, d\mathbf{y})\, \mu(dx). \tag{15}$$
Notice that (14) is obtained from (15) by choosing F to be a linear function F ( μ ) = ( f , μ ) .
Alternatively, and more relevantly for the extension to CTRW, we can obtain the same limit from a discrete-time Markov chain. Namely, let us define a Markov chain on $\mathcal{X}$ with the transition operator $U^\tau$ such that, in a state $\mathbf{x} = (x_1, \ldots, x_n)$, a jump from $x_i$ occurs with probability $\tau P(h\delta_{\mathbf{x}}, x_i)$ and is distributed according to the distribution $P(h\delta_{\mathbf{x}}, x_i, d\mathbf{y})/P(h\delta_{\mathbf{x}}, x_i)$, while with probability $1 - \tau A_h(\mathbf{x})$ the process remains in $\mathbf{x}$. In terms of the measures $h\delta_{\mathbf{x}}$ the jumps are described by the transitions from $h\delta_{\mathbf{x}}$ to $h\delta_{\mathbf{x}} - h\delta_{x_i} + h\delta_{\mathbf{y}}$.
We set h = τ to link the scaling in time with the scaling in the number of particles. Let us see what happens in the limit h = τ 0 . Namely, we are interested in the weak limit of the chains with transitions [ U τ ] [ t / τ ] , where [ t / τ ] denotes the integer part of the number t / τ , as τ 0 . It is well known (see, e.g., Theorem 19.28 of [11] or Theorem 8.1.1 of [12]) that if such a chain converges to a Feller process, then the generator of this limiting process can be obtained as the limit
$$\Lambda F = \lim_{\tau \to 0} \frac{1}{\tau}(U^\tau F - F).$$
One sees directly that this limit coincides with (13).
In the simplest case when the number of particles is preserved by all transformations, that is when only m = 1 is allowed in (10) (that is, only migration can occur), the Equations (14) and (15) simplify to the equations
$$(f, \dot{\mu}_t) = \int_{X^2} (f(y) - f(x))\, \mu_t(dx)\, P(\mu_t, x, dy), \tag{16}$$
and, respectively
$$\frac{\partial F(\mu,t)}{\partial t} = \int_{X^2} \left[ \frac{\delta F(\mu,t)}{\delta \mu}(y) - \frac{\delta F(\mu,t)}{\delta \mu}(x) \right] P(\mu, x, dy)\, \mu(dx). \tag{17}$$

3. CTRW Modeling of Interacting Particle Systems

Our objective is to obtain the dynamic LLN for interacting multi-agent systems for the case of non-exponential waiting times with the power tail distributions. As one can expect, this LLN will not be deterministic anymore.
We shall assume that the waiting times between jumps are not exponential, but have a power-law decay. Recall that a positive random variable σ with a probability law P on [ 0 , ) is said to have a power tail of index  α if
$$\mathbf{P}(\sigma > t) \sim \varkappa\, t^{-\alpha}$$
for large t, that is, the ratio of the l.h.s. and the r.h.s. tends to 1 as $t \to \infty$. Here $\varkappa$ is a positive constant.
Like exponential tails, power tails are invariant under taking minima. Namely, if $\sigma_j$, $j = 1, \ldots, d$, are independent variables with power tails of indices $\alpha_j$ and normalising constants $\varkappa_j$, then $\sigma = \min(\sigma_1, \ldots, \sigma_d)$ is clearly a variable with a power tail of index $\alpha = \alpha_1 + \cdots + \alpha_d$ and normalising constant $\varkappa_1 \cdots \varkappa_d$.
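The stability of power tails under minima is easy to check numerically. The sketch below uses Pareto-type samples with ϰ = 1 (an illustrative choice) and compares the empirical tail of the minimum with the predicted power law.

```python
import numpy as np

rng = np.random.default_rng(1)

def power_tail_sample(alpha, size):
    """Samples with exact power tail P(sigma > t) = t^(-alpha) for t >= 1."""
    return rng.random(size) ** (-1.0 / alpha)

a1, a2, n, t = 0.4, 0.3, 10**6, 20.0
sigma = np.minimum(power_tail_sample(a1, n), power_tail_sample(a2, n))
print((sigma > t).mean(), t ** (-(a1 + a2)))   # empirical vs predicted tail of the minimum
```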
In full analogy with the case of exponential times discussed above, let us assume that the waiting time for the agent at $x_i$ to decay has a power tail with the index $\alpha\tau P(x_i)$ for some fixed $\alpha \in (0, 1)$. For simplicity, assume that the normalising constant ϰ equals 1. Consequently, the minimal waiting time over all n points in a collection $\mathbf{x} = (x_1, \ldots, x_n)$ will have a probability law $Q_{\mathbf{x}}(dr)$ with a tail of index $\alpha\tau A_\tau(\mathbf{x})$, with $A_\tau$ given by (11):
$$A_\tau(\mathbf{x}) = P(\tau\delta_{\mathbf{x}}, \mathbf{x}) = \sum_i P(\tau\delta_{\mathbf{x}}, x_i) = \frac{1}{\tau} \int_X P(\tau\delta_{\mathbf{x}}, y)\, (\tau\delta_{\mathbf{x}})(dy). \tag{18}$$
Our process with power-tail waiting times can thus be described probabilistically as follows. Starting from any time and current state $\mathbf{x}$, we wait a random time σ, which has a power tail with the index $\alpha\tau A_\tau(\mathbf{x})$. Then everything goes as in the above case of exponential waiting times. Namely, when σ rings, the particle $x_i$ that makes a transition is chosen according to the probability law $P(\tau\delta_{\mathbf{x}}, x_i)/A_\tau(\mathbf{x})$, and it then makes an instantaneous transition to $\mathbf{y}$ according to the distribution $P(\tau\delta_{\mathbf{x}}, x_i, d\mathbf{y})/P(\tau\delta_{\mathbf{x}}, x_i)$. This procedure then repeats, starting from the new state $(\mathbf{x}\setminus\{x_i\}) \cup \mathbf{y}$.
In order to derive the LLN in this case, let us lift this non-Markovian evolution on the space of collections $\mathbf{x} = (x_1, \ldots, x_n)$, or the corresponding measures $h\delta_{\mathbf{x}}$, to a discrete-time Markov chain on $hM^+_\delta(X) \times \mathbf{R}_+$ by considering the total waiting time s as an additional space variable and additionally applying the usual scaling (by $\tau^{1/(\alpha\tau A_\tau(\mathbf{x}))}$) of the waiting times for the jumps of the CTRW (see Proposition A1 in Appendix A). Thus we consider the Markov chain $(M^\tau_{\mu,s}, S^\tau_{\mu,s})(k\tau)$ on $hM^+_\delta(X) \times \mathbf{R}_+$ with jumps occurring at the discrete times $k\tau$, $k \in \mathbf{N}$, such that the process at a state $(\mathbf{x}, s)$ at time $k\tau$ jumps to $((\mathbf{x}\setminus\{x_i\}) \cup \mathbf{y},\ s + \tau^{1/(\alpha\tau A_\tau(\mathbf{x}))} r)$, or equivalently a state $(h\delta_{\mathbf{x}}, s)$ jumps to the state $(h(\delta_{\mathbf{x}} - \delta_{x_i} + \delta_{\mathbf{y}}),\ s + \tau^{1/(\alpha\tau A_\tau(\mathbf{x}))} r)$, where $x_i$ and $\mathbf{y}$ are chosen as above (that is, $x_i$ according to the law $P(\tau\delta_{\mathbf{x}}, x_i)/A_\tau(\mathbf{x})$ and $\mathbf{y}$ according to the law $P(\tau\delta_{\mathbf{x}}, x_i, d\mathbf{y})/P(\tau\delta_{\mathbf{x}}, x_i)$) and r is distributed according to $Q_{\mathbf{x}}(dr)$.
As above, we link the scaling of measures with the scaling of time by choosing h = τ , which we set from now on. Then the transition operator of the chain ( M μ , s τ , S μ , s τ ) ( k τ ) is given by
$$U^\tau F(\mu, s) = \int_{\mathbf{R}_+} \int_{\mathcal{X}} Q_{\mathbf{x}}(dr) \sum_i P(\tau\delta_{\mathbf{x}}, x_i, d\mathbf{y})\, \frac{1}{A_\tau(\mathbf{x})}\, F\!\left(\mu - \tau\delta_{x_i} + \tau\delta_{\mathbf{y}},\ s + \tau^{1/(\alpha\tau A_\tau(\mathbf{x}))}\, r\right), \tag{19}$$
for $\mu = \tau\delta_{\mathbf{x}} = \tau\sum_j \delta_{x_j}$.
What we are interested in is the value of the first coordinate M μ , s τ evaluated at the random time k τ such that the total waiting time S x , s τ ( k τ ) reaches t, that is, at the time
$$k\tau = T^\tau_{\mu,s}(t) = \inf\{m\tau : S^\tau_{\mu,s}(m\tau) \geq t\},$$
so that T μ , s τ is the inverse process to S μ , s τ . Thus the scaled mean-field interacting system of particles with a power tail waiting time between jumps is the (non-Markovian) process
$$\tilde{M}^\tau_{\mu,s}(t) = M^\tau_{\mu,s}(T^\tau_{\mu,s}(t)). \tag{20}$$
This process can also be called the scaled CTRW of mean-field interacting particles (with birth-and-death and migration).
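A direct simulation of this scaled CTRW is straightforward. The sketch below reuses the hypothetical migration kernel of the earlier sketch (with smaller intensities, chosen so that the effective tail index $\alpha\tau A_\tau(\mathbf{x})$ stays in (0,1)) and returns the configuration $\tilde M^\tau_{\mu,s}(t)$, i.e., the chain read off at the inverse time of this section. All model ingredients are illustrative assumptions, not constructions from the paper.

```python
import numpy as np

rng = np.random.default_rng(2)

def intensity(x, mean_x):
    """Hypothetical migration intensity, scaled down so that alpha*(P, mu) < 1."""
    return 0.5 + 0.2 * np.abs(x - mean_x)

def ctrw_particles(n=200, alpha=0.7, t_final=5.0):
    """Sketch of the scaled CTRW of mean-field interacting particles: at every
    step k*tau the configuration jumps as in the Markov case, while the physical
    time S advances by a scaled waiting time with state-dependent tail index
    beta = alpha * tau * A_tau(x); the chain is read off at the inverse time T(t)."""
    tau = 1.0 / n
    x = rng.normal(size=n)
    s = 0.0
    while True:
        rates = intensity(x, x.mean())
        a_tau = rates.sum()                     # A_tau(x)
        beta = alpha * tau * a_tau              # state-dependent tail index, here < 1
        r = rng.random() ** (-1.0 / beta)       # waiting time with P(r > u) = u^(-beta), u >= 1
        s_next = s + tau ** (1.0 / beta) * r    # scaled physical-time increment
        if s_next > t_final:                    # total waiting time reaches t: stop and report M
            return x
        s = s_next
        i = rng.choice(n, p=rates / a_tau)      # particle chosen with law P(tau*delta_x, x_i)/A_tau(x)
        x[i] = rng.normal(loc=x.mean(), scale=0.5)   # hypothetical jump law

print(ctrw_particles().mean())
```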

4. Main Results

Let us see first of all what happens with the process ( M μ , s τ , S μ , s τ ) ( k τ ) in the limit h = τ 0 . Namely, we are interested in the weak limit of the chains with transitions [ U τ ] [ t / τ ] , where [ t / τ ] denotes the integer part of the number t / τ , as τ 0 .
As above, if such a chain converges to a Feller process, then the generator of this limiting process can be obtained as the limit
$$\Lambda F = \lim_{\tau \to 0} \frac{1}{\tau}(U^\tau F - F).$$
Lemma 1.
Assume that $F(\mu, t)$ has a bounded derivative in t and a bounded variational derivative in μ. If $\mu = \tau\delta_{\mathbf{x}}$ converges weakly, as $\tau \to 0$, to some measure, which we shall also denote by μ with some abuse of notation, then
$$\Lambda F(\mu, s) = \lim_{\tau\to 0} \frac{1}{\tau}(U^\tau F - F)(\mu, s) = \alpha(P,\mu) \int_0^\infty \frac{F(\mu, s+r) - F(\mu,s)}{r^{1+\alpha(P,\mu)}}\, dr$$
$$+ \int_X \int_{\mathcal{X}} \frac{P(\mu, z, d\mathbf{y})}{(P,\mu)}\, \mu(dz) \left[ \frac{\delta F}{\delta\mu(.)}(\mathbf{y}) - \frac{\delta F}{\delta\mu}(z) \right], \tag{22}$$
where $(P, \mu)$ denotes the usual pairing of the function P with the measure μ (taking into account the additional dependence of P on μ):
$$(P, \mu) = \int_X P(\mu, x)\, \mu(dx). \tag{23}$$
Proposition 1.
Suppose $P(\mu, x, d\mathbf{y})$ is a weakly continuous transition kernel from $M(X) \times X$ to $\mathcal{X}$, that is, the measures $P(\mu, x, d\mathbf{y})$ depend weakly continuously on $(\mu, x)$. Moreover, let the intensity $P(\mu, x)$ be everywhere strictly positive and uniformly bounded. Finally, the transition kernel is assumed to be subcritical, meaning that
$$\sum_{m=0}^{\bar m} (m-1) \int_{X^m} P(\mu, x, dy_1 \cdots dy_m) \leq 0.$$
Then the Markov process $(M_{\mu,s}(t), S_{\mu,s}(t))$ in $M(X) \times \mathbf{R}_+$, described as follows, is well defined and is generated by operator (22): the first coordinate does not depend on s (so it may be denoted simply $M_\mu(t)$), is deterministic, and solves the kinetic equation of type (14)
$$(f, \dot{\mu}_t) = \frac{1}{(P, \mu)} \int_X \int_{\mathcal{X}} (f(\mathbf{y}) - f(x))\, \mu_t(dx)\, P(\mu_t, x, d\mathbf{y}), \tag{24}$$
and the second coordinate is a time-nonhomogeneous stable-like subordinator generated by the time-dependent family of generators
$$\Lambda^t_{st}\, g(s) = \alpha(P, \mu(t)) \int_0^\infty \frac{g(s+r) - g(s)}{r^{1+\alpha(P,\mu(t))}}\, dr \tag{25}$$
(see Appendix B for the proper definition of this process).
Moreover, the discrete time Markov chains ( M μ , s τ , S μ , s τ ) ( k τ ) given by (19) converge weakly to the Markov process ( M μ , s ( t ) , S μ , s ( t ) ) , so that, in particular, for any continuous bounded function F,
$$\lim_{\tau\to 0,\ k\tau \to t} \mathbf{E}\, F\big((M^\tau_{\mu,s}, S^\tau_{\mu,s})(k\tau)\big) = \mathbf{E}\, F\big((M_{\mu,s}, S_{\mu,s})(t)\big). \tag{26}$$
Remark 5.
The assumption of boundedness of the intensity $P(\mu, x)$ is made for simplicity and can be substantially weakened. Effectively one needs here the well-posedness of the kinetic Equation (24), which is established under rather general assumptions, see [1].
Finally we can formulate our main result.
Theorem 1.
Under the assumptions of Proposition 1, the marginal distributions of the scaled CTRW of mean-field interacting particles (20) converge to the marginal distributions of the process
$$\tilde M_{\mu,s}(t) = M_{\mu,s}(T_{\mu,s}(t)), \tag{27}$$
where T μ , s ( t ) is the random time when the stable-like process generated by (25) and started at s reaches the time t,
$$T_{\mu,s}(t) = \inf\{r : S_{\mu,s}(r) \geq t\},$$
that is, for a bounded continuous function F ( μ ) , it holds that
$$\lim_{\tau\to 0} \mathbf{E}\, F(\tilde M^\tau_{\mu,s}(t)) = \mathbf{E}\, F(\tilde M_{\mu,s}(t)). \tag{28}$$
Moreover, for any smooth function F ( μ ) (with continuous bounded variational derivative), the evolution of averages F ( μ , s ) = E F ( M ˜ μ , s ( t ) ) satisfies the mixed fractional differential equation
$$D_t^{\alpha(P,\mu)} F(\mu, s) = \int_X \int_{\mathcal{X}} \frac{P(\mu, z, d\mathbf{y})}{(P,\mu)}\, \mu(dz) \left[ \frac{\delta F}{\delta\mu(.)}(\mathbf{y}) - \frac{\delta F}{\delta\mu}(z) \right], \quad s \in [0, t], \tag{29}$$
with the terminal condition $F(\mu, t) = F(\mu)$, where the right fractional derivative acting on the variable $s \leq t$ of $F(\mu, s)$ is defined as
$$D_t^{\alpha(P,\mu)} g(s) = \alpha(P,\mu) \int_0^{t-s} \frac{g(s+y) - g(s)}{y^{1+\alpha(P,\mu)}}\, dy + \alpha(P,\mu)\,(g(t) - g(s)) \int_{t-s}^{\infty} \frac{dy}{y^{1+\alpha(P,\mu)}}. \tag{30}$$
It is not difficult to extend this statement to the functional level, namely by deriving the convergence in distribution of the scaled CTRW (20) to the process (27), but we shall not plunge into related technical details here.
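The right fractional derivative appearing in Theorem 1 can be evaluated directly by quadrature once the order β = α(P,μ) is treated as a fixed number. The sketch below does this for the test function g(r) = r, for which the two-term definition reduces to the closed form (t−s)^{1−β}/(1−β); the function, parameter values and use of scipy are illustrative choices.

```python
import numpy as np
from scipy.integrate import quad

def right_frac_derivative(g, s, t, beta):
    """Quadrature evaluation of the right fractional derivative of order beta
    defined in Theorem 1: an integral over (0, t - s) plus a boundary term
    involving g(t). The integrand has an integrable singularity at 0."""
    integral, _ = quad(lambda y: (g(s + y) - g(s)) / y ** (1 + beta), 0.0, t - s)
    boundary = (g(t) - g(s)) * (t - s) ** (-beta)   # beta * int_{t-s}^inf y^{-1-beta} dy
    return beta * integral + boundary

beta, s, t = 0.6, 0.3, 1.0
g = lambda r: r
print(right_frac_derivative(g, s, t, beta))
print((t - s) ** (1 - beta) / (1 - beta))   # closed form for g(r) = r
```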

5. Proof of Lemma 1

We have
$$\frac{1}{\tau}(U^\tau F - F)(\mu, s) = \frac{1}{\tau}\left[ \int_{\mathbf{R}_+} \int_{\mathcal{X}} Q_{\mathbf{x}}(dr) \sum_i P(x_i, d\mathbf{y})\, \frac{1}{A(\mathbf{x})}\, F\!\left(\mu - \tau\delta_{x_i} + \tau\delta_{\mathbf{y}},\ s + \tau^{1/(\alpha\tau A(\mathbf{x}))} r\right) - F(\mu, s) \right]$$
$$= \frac{1}{\tau}\left[ \int Q_{\mathbf{x}}(dr)\, F\!\left(\mu,\ s + \tau^{1/(\alpha\tau A(\mathbf{x}))} r\right) - F(\mu, s) \right]$$
$$+ \frac{1}{\tau}\left[ \int_{\mathcal{X}} \sum_i P(x_i, d\mathbf{y})\, \frac{1}{A(\mathbf{x})}\, F\!\left(\mu - \tau\delta_{x_i} + \tau\delta_{\mathbf{y}},\ s\right) - F(\mu, s) \right] + R, \tag{31}$$
where the error term is
$$R = \frac{1}{\tau} \int_{\mathbf{R}_+} \int_{\mathcal{X}} Q_{\mathbf{x}}(dr) \sum_i P(x_i, d\mathbf{y})\, \frac{1}{A(\mathbf{x})} \left[ g_{i,\mathbf{y}}\!\left(\mu, s + \tau^{1/(\alpha\tau A(\mathbf{x}))} r\right) - g_{i,\mathbf{y}}(\mu, s) \right],$$
with
$$g_{i,\mathbf{y}}(\mu, s) = F(\mu - \tau\delta_{x_i} + \tau\delta_{\mathbf{y}}, s) - F(\mu, s).$$
Assuming that τ δ x converges to some measure, which we also denote by μ , as τ 0 , we can conclude by (A3) that the first term in (31) converges, as τ 0 , to
$$\alpha(P,\mu) \int_0^\infty \frac{F(\mu, s+r) - F(\mu,s)}{r^{1+\alpha(P,\mu)}}\, dr,$$
whenever F is continuously differentiable in s. By the definition of the variational derivative, the second term in (31) converges, as τ 0 , to
$$\int_X \int_{\mathcal{X}} \frac{P(z, d\mathbf{y})}{(P,\mu)}\, \mu(dz) \left[ \frac{\delta F}{\delta\mu(.)}(\mathbf{y}) - \frac{\delta F}{\delta\mu}(z) \right].$$
To estimate the term R we note that if F has a bounded derivative in t and a bounded variational derivative in μ, then $g_{i,\mathbf{y}}$ is uniformly bounded by a quantity of order τ and the derivative $|\partial g_{i,\mathbf{y}}/\partial s|$ is uniformly bounded. Hence by (A3) it follows that $R \to 0$ as $\tau \to 0$, implying (22).

6. Proof of Proposition 1

By Theorem 6.1 of [1], under the assumptions of the Proposition, the kinetic Equation (24) is well posed in M ( X ) . Consequently, in view of the discussion of stable-like subordinators from Appendix B, the process ( M μ , s ( t ) , S μ , s ( t ) ) in M ( X ) × R + , as described in Proposition 1, is well defined.
Let us show that, for smooth functions $F(\mu, s)$ such that $F(\mu, 0) = 0$, the generator of this process is indeed given by formula (22). To simplify formulas we shall sometimes consider $F(\mu, s)$ to be defined for all s, with $F(\mu, s) = 0$ for all $s \leq 0$. By (A10), the transition operators $T^t F(\mu, s) = \mathbf{E}\, F((M_{\mu,s}, S_{\mu,s})(t))$ of this process are given by the formula
$$T^t F(\mu, s) = \int_0^\infty F(M_{\mu,s}(t), S)\, G_{t,0}(S - s)\, dS, \tag{32}$$
where, by (A11) and (25),
$$G_{t,0}(x) = \frac{1}{2\pi} \int_{\mathbf{R}} e^{-ipx}\, \exp\Big\{ \int_0^t \alpha(P,\mu(\tau))\, \Gamma(-\alpha(P,\mu(\tau)))\, |p|^{\alpha(P,\mu(\tau))} \exp\{-i\pi\,\alpha(P,\mu(\tau))\, \mathrm{sgn}(p)/2\}\, d\tau \Big\}\, dp, \tag{33}$$
where for brevity we wrote μ ( τ ) for M μ , s ( τ ) .
We need to show that
$$\frac{d}{dt}\Big|_{t=0} T^t F(\mu, s) = \Lambda F(\mu, s),$$
with Λ given by (22). Differentiating (32), we find that
$$\frac{d}{dt} T^t F(\mu, s) = \int_0^\infty F(M_{\mu,s}(t), S)\, \frac{d}{dt} G_{t,0}(S - s)\, dS + \int_0^\infty \Big( \frac{\delta F(M_{\mu,s}(t), S)}{\delta \mu},\, \dot{\mu} \Big)\, G_{t,0}(S - s)\, dS.$$
We see that the second term turns into the second term of (22) at $t = 0$. Noting that $G_{t,0}(x) = 0$ for $x \leq 0$ and using (A13), we get for the first term that
$$\int_0^\infty F(M_{\mu,s}(t), S)\, \frac{d}{dt} G_{t,0}(S - s)\, dS = \int F(M_{\mu,s}(t), S)\, \Lambda^t_{st}\, G_{t,0}(S - s)\, dS,$$
with Λ s t t given by (25), which by changing variables rewrites as
$$\int \Lambda^t_{st}\, F(M_{\mu,s}(t), S)\, G_{t,0}(S - s)\, dS,$$
and which in turn for $t = 0$ becomes equal to $\Lambda^0_{st} F(\mu, s)$, that is, to the first term of (22). Thus smooth functions $F(\mu, s)$ vanishing at $s = 0$ and $s = \infty$ do belong to the domain of the generator of the process $(M_{\mu,s}(t), S_{\mu,s}(t))$.
Differentiating Formula (32) with respect to μ and s, we can conclude that smooth functions are invariant under the semigroup of this process. In fact, differentiability with respect to s follows from the explicit Formula (33) for $G_{t,0}$ (and the bounds (A12)), and differentiability with respect to μ follows, on the one hand, from the explicit formula for $G_{t,0}$ and, on the other hand, from the smooth dependence of the solutions $M_{\mu,s}(t)$ of the kinetic equations on the initial data μ. This smooth dependence is a known fact from the theory of kinetic equations (Theorem 8.2 of [1]), which has in fact a rather straightforward proof under the assumption of bounded intensities. Consequently, smooth functions $F(\mu, s)$, vanishing at $s = 0$ and $s = \infty$, form an invariant core for the semigroup of the process $(M_{\mu,s}(t), S_{\mu,s}(t))$.
Consequently, from Lemma 1 and the general result on the convergence of semigroups (see e.g., Theorem 19.28 of [11]) we can conclude that the Markov chains ( M μ , s τ , S μ , s τ ) ( k τ ) converge weakly to the Markov process ( M μ , s ( t ) , S μ , s ( t ) ) thus completing the proof of the Proposition.

7. Proof of Theorem 1

By the density arguments, to prove (28), it is sufficient to show that
$$\mathbf{E}\, F(\tilde M^\tau_{\mu,s}(t)) - \mathbf{E}\, F(\tilde M_{\mu,s}(t)) = \mathbf{E}\, F(M^\tau_{\mu,s}(T^\tau_{\mu,s}(t))) - \mathbf{E}\, F(M_{\mu,s}(T_{\mu,s}(t))) \to 0,$$
as τ 0 , for smooth functions F (that have bounded continuous variational derivatives).
We have
$$|\mathbf{E}\, F(M^\tau_{\mu,s}(T^\tau_{\mu,s}(t))) - \mathbf{E}\, F(M_{\mu,s}(T_{\mu,s}(t)))| \leq I + II,$$
with
$$I = |\mathbf{E}\, F(M^\tau_{\mu,s}(T^\tau_{\mu,s}(t))) - \mathbf{E}\, F(M_{\mu,s}(T^\tau_{\mu,s}(t)))|,$$
$$II = |\mathbf{E}\, F(M_{\mu,s}(T^\tau_{\mu,s}(t))) - \mathbf{E}\, F(M_{\mu,s}(T_{\mu,s}(t)))|.$$
To estimate I we write
$$I = \int_0^\infty \big[\mathbf{E}\, F(M^\tau_{\mu,s}(r)) - F(M_{\mu,s}(r))\big]\, \xi_t(dr)$$
$$= \int_0^K \big[\mathbf{E}\, F(M^\tau_{\mu,s}(r)) - F(M_{\mu,s}(r))\big]\, \xi_t(dr) + \int_K^\infty \big[\mathbf{E}\, F(M^\tau_{\mu,s}(r)) - F(M_{\mu,s}(r))\big]\, \xi_t(dr),$$
where $\xi_t$ (depending on τ, μ, s) is the distribution of $T^\tau_{\mu,s}(t)$. Choosing K large enough we can make the second integral arbitrarily small uniformly in τ. Then, by Proposition 1, we can make the first integral arbitrarily small by choosing τ small enough (uniformly in t from compact sets).
It remains to estimate II. Integrating by parts, we get the following:
$$II = \Big| \int_0^\infty F(M_{\mu,s}(r))\, d\mathbf{P}(T^\tau_{\mu,s}(t) \leq r) - \int_0^\infty F(M_{\mu,s}(r))\, d\mathbf{P}(T_{\mu,s}(t) \leq r) \Big|$$
$$= \Big| \int_0^\infty \frac{\partial}{\partial r}\big(F(M_{\mu,s}(r))\big)\, \mathbf{P}(T^\tau_{\mu,s}(t) \leq r)\, dr - \int_0^\infty \frac{\partial}{\partial r}\big(F(M_{\mu,s}(r))\big)\, \mathbf{P}(T_{\mu,s}(t) \leq r)\, dr \Big|,$$
and therefore
$$II = \Big| \int_0^\infty \frac{\partial}{\partial r}\big(F(M_{\mu,s}(r))\big)\, \mathbf{P}(S^\tau_{\mu,s}(r) > t)\, dr - \int_0^\infty \frac{\partial}{\partial r}\big(F(M_{\mu,s}(r))\big)\, \mathbf{P}(S_{\mu,s}(r) > t)\, dr \Big|.$$
By Proposition 1 (and because the distribution of the random variable $S_{\mu,s}$ is absolutely continuous), $\mathbf{P}(S^\tau_{\mu,s}(r) > t) \to \mathbf{P}(S_{\mu,s}(r) > t)$ as $\tau \to 0$. Therefore $II \to 0$ by dominated convergence, as $\tau \to 0$.
It remains to show that F satisfies Equation (29). However, this follows from the general arguments, see Formulas (A14) and (A15) from Appendix C, because Equation (29) is a particular case of equation L ˜ f = 0 with L ˜ from (A14).

8. Extension to Binary and k -ary Interaction

Here we extend the CTRW modeling of interacting particles to the case of binary or even more general k-ary interactions stressing main new points and omitting details. Firstly we recall the basic scheme of general Markovian binary interaction from [1].
A mean-field dependent jump-type process of binary interaction of particles (with a possible decrease in the number of particles) can be specified by a continuous transition kernel
$$P(\mu, x_1, x_2, d\mathbf{y}) = \{P(\mu, x_1, x_2, dy_1 \cdots dy_m),\ m = 0, 1, 2\}$$
from S X 2 to S X depending on a measure μ M ( X ) as a parameter, with the intensity
$$P(\mu, x_1, x_2) = \int_{\mathcal{X}} P(\mu, x_1, x_2, d\mathbf{y}) = \sum_{m=0}^{2} \int_{X^m} P_m(\mu, x_1, x_2, dy_1 \cdots dy_m).$$
We again assume that P ( μ , x 1 , x 2 , { x 1 , x 2 } ) = 0 always.
Remark 6.
The possibility that more than two particles can result after the decay of two particles creates some technical difficulties for the analysis of the kinetic equation that we choose to avoid here.
For a finite subset I = { i 1 , , i k } of a finite set J = { 1 , , n } , we denote by | I | the number of elements in I, by I ¯ its complement J \ I and by x I the collection of variables x i 1 , , x i k .
The corresponding scaled Markov process on X of binary interaction is defined via its generator
$$(G^2_h f)(x_1, \ldots, x_n) = h \sum_{I \subset \{1, \ldots, n\},\ |I| = 2} \int \big(f(\mathbf{x}_{\bar I}, \mathbf{y}) - f(x_1, \ldots, x_n)\big)\, P(h\delta_{\mathbf{x}}, \mathbf{x}_I, d\mathbf{y})$$
$$= h \sum_{m=0}^{2} \sum_{I \subset \{1,\ldots,n\},\ |I|=2} \int \big(f(\mathbf{x}_{\bar I}, y_1, \ldots, y_m) - f(x_1, \ldots, x_n)\big)\, P_m(h\delta_{\mathbf{x}}, \mathbf{x}_I; dy_1 \cdots dy_m)$$
(note the additional multiplier h, as compared with (12), needed for the proper scaling of binary interactions). In terms of the measures from h M δ + ( X ) the process can be equivalently described by the generator
$$G^2_h F(h\delta_{\mathbf{x}}) = h \sum_{I \subset \{1,\ldots,n\},\ |I|=2} \int_{\mathcal{X}} [F(h\delta_{\mathbf{x}} - h\delta_{\mathbf{x}_I} + h\delta_{\mathbf{y}}) - F(h\delta_{\mathbf{x}})]\, P(h\delta_{\mathbf{x}}, \mathbf{x}_I; d\mathbf{y}),$$
which acts on the space of continuous functions F on h M δ + ( X ) .
Applying the obvious equation
$$\sum_{I \subset \{1,\ldots,n\},\ |I|=2} f(\mathbf{x}_I) = \frac{1}{2} \int\!\!\int f(z_1, z_2)\, \delta_{\mathbf{x}}(dz_1)\, \delta_{\mathbf{x}}(dz_2) - \frac{1}{2} \int f(z, z)\, \delta_{\mathbf{x}}(dz), \tag{36}$$
which holds for any f C sym ( X 2 ) and x = ( x 1 , , x n ) X n , one observes that the operator G h 2 can be written in the form
$$G^2_h F(h\delta_{\mathbf{x}}) = -\frac{1}{2} \int_X \int_{\mathcal{X}} [F(h\delta_{\mathbf{x}} - 2h\delta_z + h\delta_{\mathbf{y}}) - F(h\delta_{\mathbf{x}})]\, P(h\delta_{\mathbf{x}}, z, z, d\mathbf{y})\, (h\delta_{\mathbf{x}})(dz)$$
$$+ \frac{1}{2h} \int_{X^2} \int_{\mathcal{X}} [F(h\delta_{\mathbf{x}} - h\delta_{z_1} - h\delta_{z_2} + h\delta_{\mathbf{y}}) - F(h\delta_{\mathbf{x}})]\, P(h\delta_{\mathbf{x}}, z_1, z_2, d\mathbf{y})\, (h\delta_{\mathbf{x}})(dz_1)\, (h\delta_{\mathbf{x}})(dz_2).$$
On the linear functions
$$F_g(\mu) = \int g(y)\, \mu(dy) = (g, \mu)$$
this operator acts as
$$\Lambda^h_2 F_g(h\delta_{\mathbf{x}}) = \frac{1}{2} \int_{X^2} \int_{\mathcal{X}} [g(\mathbf{y}) - g(z_1, z_2)]\, P(h\delta_{\mathbf{x}}, z_1, z_2, d\mathbf{y})\, (h\delta_{\mathbf{x}})(dz_1)\, (h\delta_{\mathbf{x}})(dz_2)$$
$$- \frac{h}{2} \int_X \int_{\mathcal{X}} [g(\mathbf{y}) - g(z, z)]\, P(h\delta_{\mathbf{x}}, z, z, d\mathbf{y})\, (h\delta_{\mathbf{x}})(dz).$$
It follows that if $h \to 0$ and $h\delta_{\mathbf{x}}$ tends to some finite measure μ (in other words, the number of particles tends to infinity, but the “whole mass” remains finite due to the scaling of each atom), the corresponding evolution equation $\dot F = G^2_h F$ on linear functionals $F = F_g$ tends to the equation
$$\frac{d}{dt}(g, \mu_t) = \Lambda_2 F_g(\mu_t) = \frac{1}{2} \int_{X^2} \int_{\mathcal{X}} (g(\mathbf{y}) - g(\mathbf{z}))\, P(\mu_t, \mathbf{z}, d\mathbf{y})\, \mu_t(dz_1)\, \mu_t(dz_2), \quad \mathbf{z} = (z_1, z_2), \tag{38}$$
which is the general kinetic equation for binary interactions of pure jump type in weak form.
For a nonlinear smooth function F ( μ ) the time evolving function F ( μ , t ) = F ( μ t ( μ ) ) satisfies the equation in variational derivatives
$$\frac{\partial F(\mu,t)}{\partial t} = \int_{X^2} \int_{\mathcal{X}} P(\mu, x_1, x_2, d\mathbf{y})\, \mu(dx_1)\, \mu(dx_2) \left[ \frac{\delta F(\mu,t)}{\delta\mu(.)}(\mathbf{y}) - \frac{\delta F(\mu,t)}{\delta\mu(.)}(x_1, x_2) \right]. \tag{39}$$
As in the case of mean-field interaction, the evolution (39) can be obtained as the limit of discrete Markov chains with waiting times depending on the current position of the particle system.
Let us see what comes out of the CTRW modelling.
To this end we attach a random waiting time $\sigma_{ij}$ to each pair $(x_i, x_j)$ of particles, assuming that $\sigma_{ij}$ has a power tail with the index $\alpha\tau P(\tau\delta_{\mathbf{x}}, x_i, x_j)$ for some fixed $\alpha \in (0,1)$. Consequently, the minimal waiting time σ of all pairs in a collection $\mathbf{x} = (x_1, \ldots, x_n)$ will have a probability law $Q_{\mathbf{x}}(dr)$ with a tail of index $\alpha\tau A_\tau(\mathbf{x})$, with
$$A_\tau(\mathbf{x}) = \sum_{I : |I| = 2} P(\tau\delta_{\mathbf{x}}, \mathbf{x}_I) = \frac{1}{2\tau^2} \int_{X^2} P(\tau\delta_{\mathbf{x}}, z_1, z_2)\, (\tau\delta_{\mathbf{x}})(dz_1)\, (\tau\delta_{\mathbf{x}})(dz_2) - \frac{1}{2\tau} \int_X P(\tau\delta_{\mathbf{x}}, z, z)\, (\tau\delta_{\mathbf{x}})(dz),$$
where (36) was used.
In full analogy with the case of mean-field interaction, let us define the discrete-time Markov chain on $hM^+_\delta(X) \times \mathbf{R}_+$ by considering the total waiting time s as an additional space variable and additionally applying the usual scaling of the waiting times. Namely, we consider the Markov chain $(M^\tau_{\mu,s}, S^\tau_{\mu,s})(k\tau)$ on $hM^+_\delta(X) \times \mathbf{R}_+$ with jumps occurring at the discrete times $k\tau$, $k \in \mathbf{N}$, such that the process at a state $(\mathbf{x}, s)$ at time $k\tau$ jumps to $((\mathbf{x}\setminus\{x_i, x_j\}) \cup \mathbf{y},\ s + \tau^{1/(\alpha\tau A_\tau(\mathbf{x}))} r)$, or equivalently a state $(h\delta_{\mathbf{x}}, s)$ jumps to the state $(h(\delta_{\mathbf{x}} - \delta_{x_i} - \delta_{x_j} + \delta_{\mathbf{y}}),\ s + \tau^{1/(\alpha\tau A_\tau(\mathbf{x}))} r)$, where the pair $(x_i, x_j)$ is chosen according to the law $P(\tau\delta_{\mathbf{x}}, x_i, x_j)/A_\tau(\mathbf{x})$, $\mathbf{y}$ according to the law $P(\tau\delta_{\mathbf{x}}, x_i, x_j, d\mathbf{y})/P(\tau\delta_{\mathbf{x}}, x_i, x_j)$, and r is distributed according to $Q_{\mathbf{x}}(dr)$.
Again choosing h = τ , the transition operator of the chain ( M μ , s τ , S μ , s τ ) ( k τ ) is given by
$$U^\tau F(\mu, s) = \int_{\mathbf{R}_+} \int_{\mathcal{X}} Q_{\mathbf{x}}(dr) \sum_{I : |I|=2} P(\tau\delta_{\mathbf{x}}, \mathbf{x}_I, d\mathbf{y})\, \frac{1}{A_\tau(\mathbf{x})}\, F\!\left(\mu - \tau\delta_{\mathbf{x}_I} + \tau\delta_{\mathbf{y}},\ s + \tau^{1/(\alpha\tau A_\tau(\mathbf{x}))}\, r\right),$$
for $\mu = \tau\delta_{\mathbf{x}} = \tau\sum_j \delta_{x_j}$.
We are again interested in the value of the first coordinate M μ , s τ evaluated at the random time k τ such that the total waiting time S x , s τ ( k τ ) reaches t, that is, at the time
$$k\tau = T^\tau_{\mu,s}(t) = \inf\{m\tau : S^\tau_{\mu,s}(m\tau) \geq t\},$$
so that T μ , s τ is the inverse process to S μ , s τ . Thus the scaled mean-field and binary interacting system of particles with a power tail waiting time between jumps is the (non-Markovian) process
$$\tilde M^\tau_{\mu,s}(t) = M^\tau_{\mu,s}(T^\tau_{\mu,s}(t)). \tag{42}$$
This process can also be called the scaled CTRW of mean-field and binary interacting particles.
Analogously to Lemma 1 one shows that
$$\Lambda F(\mu, s) = \lim_{\tau\to 0} \frac{1}{\tau}(U^\tau F - F)(\mu, s) = \alpha(P,\mu) \int_0^\infty \frac{F(\mu, s+r) - F(\mu,s)}{r^{1+\alpha(P,\mu)}}\, dr$$
$$+ \int_{X^2} \int_{\mathcal{X}} \frac{P(\mu, z_1, z_2, d\mathbf{y})}{(P,\mu)}\, \mu(dz_1)\, \mu(dz_2) \left[ \frac{\delta F}{\delta\mu(.)}(\mathbf{y}) - \frac{\delta F}{\delta\mu(.)}(z_1, z_2) \right], \tag{43}$$
where
$$(P, \mu) = \int_{X^2} P(\mu, x_1, x_2)\, \mu(dx_1)\, \mu(dx_2).$$
Then, analogously to Proposition 1 one establishes the following result.
Proposition 2.
Let the kernel P from $M(X) \times SX^2$ to $\mathcal{X}$ be strictly positive, uniformly bounded and weakly continuous. Then operator (43) generates a Markov process $(M_{\mu,s}(t), S_{\mu,s}(t))$ in $M(X) \times \mathbf{R}_+$ such that the first coordinate does not depend on s, is deterministic, and solves the kinetic Equation (38). The second coordinate is a stable-like subordinator generated by the time-dependent family of operators
$$\Lambda^t_{st}\, g(s) = \alpha(P, \mu(t)) \int_0^\infty \frac{g(s+r) - g(s)}{r^{1+\alpha(P,\mu(t))}}\, dr.$$
Moreover, the Markov chain (in discrete time) ( M μ , s τ , S μ , s τ ) ( k τ ) given by (19) converges weakly to the Markov process ( M μ , s ( t ) , S μ , s ( t ) ) .
Finally, one proves the analog of Theorem 1, that is, that the process (42) converges weakly to the process $\tilde M_{\mu,s}(t) = M_{\mu,s}(T_{\mu,s}(t))$, whose averages $F(\mu, s) = \mathbf{E}\, F(\tilde M_{\mu,s}(t))$ satisfy the fractional equation
$$D_t^{\alpha(P,\mu)} F(\mu, s) = \int_{X^2} \int_{\mathcal{X}} \frac{P(\mu, z_1, z_2, d\mathbf{y})}{(P,\mu)}\, \mu(dz_1)\, \mu(dz_2) \left[ \frac{\delta F}{\delta\mu(.)}(\mathbf{y}) - \frac{\delta F}{\delta\mu(.)}(z_1, z_2) \right], \tag{45}$$
for s [ 0 , t ] , where
$$D_t^{\alpha(P,\mu)} g(s) = \alpha(P,\mu) \int_0^{t-s} \frac{g(s+y) - g(s)}{y^{1+\alpha(P,\mu)}}\, dy + \alpha(P,\mu)\,(g(t) - g(s)) \int_{t-s}^\infty \frac{dy}{y^{1+\alpha(P,\mu)}}.$$
Similarly, the kth order (or k-ary) interactions of jump type are specified by transition kernels
$$P(\mu, x_1, \ldots, x_k, d\mathbf{y}) = \{P(\mu, x_1, \ldots, x_k, dy_1 \cdots dy_m),\ m = 0, \ldots, k\}$$
from $X^k$ to $S\mathcal{X}$. The corresponding limit of the scaled CTRW is governed by the following equation that extends (45) from $k = 2$ to an arbitrary k:
$$D_t^{\alpha(P,\mu)} F(\mu, s) = \int_{X^k} \int_{\mathcal{X}} \frac{P(\mu, z_1, \ldots, z_k, d\mathbf{y})}{(P,\mu)}\, \mu(dz_1) \cdots \mu(dz_k) \left[ \frac{\delta F}{\delta\mu(.)}(\mathbf{y}) - \frac{\delta F}{\delta\mu(.)}(z_1, \ldots, z_k) \right].$$

9. Example: Fractional Smoluchovski Coagulation Evolution

One of the most famous examples of kinetic equations for binary interactions (38) is the Smoluchovski equation describing the process of mass-preserving binary coagulation. In its slightly generalized standard weak form it reads (see, e.g., [13]) as
$$\frac{d}{dt} \int_X g(z)\, \mu_t(dz) = \frac{1}{2} \int_{X^3} [g(y) - g(z_1) - g(z_2)]\, K(z_1, z_2; dy)\, \mu_t(dz_1)\, \mu_t(dz_2).$$
Here X is a locally compact set, $E : X \to \mathbf{R}_+$ is a continuous function (the generalized mass) and the (coagulation) transition kernel $P(z_1, z_2, dy) = K(z_1, z_2, dy)$ is such that the measures $K(z_1, z_2; \cdot)$ are supported on the set $\{y : E(y) = E(z_1) + E(z_2)\}$ (preservation of mass).
The corresponding fractional evolution (45) takes the form
$$D_t^{\alpha(K,\mu)} F(\mu, s) = \int_{X^3} \frac{K(z_1, z_2, dy)}{(K,\mu)}\, \mu(dz_1)\, \mu(dz_2) \left[ \frac{\delta F}{\delta\mu}(y) - \frac{\delta F}{\delta\mu}(z_1) - \frac{\delta F}{\delta\mu}(z_2) \right],$$
with
$$(K, \mu) = \int_{X^3} K(z_1, z_2, dy)\, \mu(dz_1)\, \mu(dz_2).$$
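For a concrete feel of the coagulation dynamics behind this example, here is a minimal Marcus–Lushnikov-type Monte Carlo sketch with the constant kernel K ≡ 1 (an illustrative choice): with a constant kernel the coagulating pair is uniform, whereas a general kernel would require weighting the pair choice by K(z_1, z_2). The rate scaling 1/n mirrors the mean-field scaling of Section 8.

```python
import numpy as np

rng = np.random.default_rng(3)

def coagulation(n=2000, t_final=1.0):
    """Marcus-Lushnikov Monte Carlo for mass-preserving binary coagulation with
    the constant kernel K = 1: each pair merges at rate 1/n, so the total rate
    is m(m-1)/(2n) when m clusters remain."""
    masses = list(np.ones(n))
    t = 0.0
    while len(masses) > 1:
        m = len(masses)
        total_rate = m * (m - 1) / (2.0 * n)
        t += rng.exponential(1.0 / total_rate)
        if t > t_final:
            break
        i, j = rng.choice(m, size=2, replace=False)
        masses[i] += masses[j]          # the chosen pair merges into one cluster
        masses.pop(j)
    return np.array(masses)

clusters = coagulation()
print(len(clusters), clusters.sum())    # cluster count decreases, total mass is preserved
```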

10. Example: Fractional Boltzmann Collisions Evolution

The classical spatially trivial Boltzmann equation in $\mathbf{R}^d$ reads as
$$\frac{d}{dt}(g, \mu_t) = \frac{1}{4} \int_{S^{d-1}} \int_{\mathbf{R}^{2d}} [g(w_1) + g(w_2) - g(v_1) - g(v_2)]\, B(|v_1 - v_2|, \theta)\, dn\, \mu_t(dv_1)\, \mu_t(dv_2),$$
where v j , w j R d , S d 1 is a unit sphere in R d with d n the Lebesgue measure on it,
$$w_1 = v_1 - n(v_1 - v_2, n), \quad w_2 = v_2 + n(v_1 - v_2, n), \quad n \in S^{d-1},$$
θ is the angle between $v_2 - v_1$ and n, and B is a collision kernel, which is a continuous function on $\mathbf{R}_+ \times [0, \pi]$ satisfying the symmetry condition $B(|v|, \theta) = B(|v|, \pi - \theta)$.
The corresponding fractional evolution (45) takes the form
$$D_t^{\alpha(B,\mu)} F(\mu, s) = \frac{1}{4} \int_{S^{d-1}} \int_{\mathbf{R}^{2d}} \frac{B(|v_1 - v_2|, \theta)}{(B,\mu)}\, dn\, \mu(dv_1)\, \mu(dv_2) \left[ \frac{\delta F}{\delta\mu}(w_1) + \frac{\delta F}{\delta\mu}(w_2) - \frac{\delta F}{\delta\mu}(v_1) - \frac{\delta F}{\delta\mu}(v_2) \right],$$
with
$$(B, \mu) = \int_{\mathbf{R}^{2d}} \int_{S^{d-1}} B(|v_1 - v_2|, \theta)\, dn\, \mu(dv_1)\, \mu(dv_2).$$
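As a small sanity check on the collision mechanism entering both evolutions above, the following sketch samples a random reflection direction n on the unit sphere and verifies that the post-collision velocities $w_1, w_2$ conserve momentum and kinetic energy. It is a toy illustration of the collision rule only, not a solver for the Boltzmann equation.

```python
import numpy as np

rng = np.random.default_rng(4)

def collide(v1, v2):
    """Post-collision velocities: reflection of the relative velocity on a
    uniformly random direction n of the unit sphere."""
    n = rng.normal(size=3)
    n /= np.linalg.norm(n)
    w1 = v1 - n * np.dot(v1 - v2, n)
    w2 = v2 + n * np.dot(v1 - v2, n)
    return w1, w2

v1, v2 = rng.normal(size=3), rng.normal(size=3)
w1, w2 = collide(v1, v2)
print(np.allclose(v1 + v2, w1 + w2))                               # momentum conserved
print(np.isclose((v1**2 + v2**2).sum(), (w1**2 + w2**2).sum()))    # energy conserved
```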

Author Contributions

Conceptualization, V.N.K.; methodology, V.N.K.; formal analysis, V.N.K. and M.T.; investigation, V.N.K. and M.T.; writing—original draft preparation, V.N.K. and M.T.; writing—review and editing, V.N.K. and M.T. All authors have read and agreed to the published version of the manuscript.

Funding

The work is supported by the Ministry of Science and Higher Education of the Russian Federation: agreement No. 075-02-2021-1396, 31 May 2021.

Institutional Review Board Statement

Not applicable.

Informed Consent Statement

Not applicable.

Data Availability Statement

Not applicable.

Conflicts of Interest

The authors declare no conflict of interest.

Appendix A. CTRW Approximation of Stable Processes

As an auxiliary result, we need the standard functional limit theorem for the random-walk-approximation of stable laws, see, e.g., [12,14,15] and references therein for various proofs.
Let a positive random variable T with distribution $Q_T(dr)$ belong to the domain of attraction of a β-stable law, $\beta \in (0, 1)$, in the sense that
$$\mathbf{P}(T > m) \sim \frac{1}{\beta}\, m^{-\beta} \tag{A1}$$
(the sign ∼ means here that the ratio tends to 1 as $m \to \infty$). Let $T_i$ be a sequence of i.i.d. random variables distributed like T and
$$\Phi^h_t = \sum_{i=1}^{[t/h]} h^{1/\beta}\, T_i$$
be the scaled random walk based on the $T_i$, $h > 0$, so that $\Phi^h_t$ can be presented as a scaled Markov chain $\Phi^h_t = U_h^{[t/h]}$, where $[t/h]$ is the integer part of $t/h$ and $U_h^1 = U_h$ is the Markov transition operator
$$U_h f(x) = \int_{\mathbf{R}_+} f(x + h^{1/\beta} r)\, Q_T(dr).$$
Proposition A1.
Let S t be a β-stable Lévy subordinator, that is a Lévy process in R + generated by the stable generator
$$L_\beta f(x) = \int_0^\infty \frac{f(x+y) - f(x)}{y^{1+\beta}}\, dy \tag{A2}$$
(which up to a multiplier represents the fractional derivative $d^\beta/d(-x)^\beta$). Then
$$\lim_{h\to 0} \frac{U_h f(x) - f(x)}{h} = L_\beta f(x) \tag{A3}$$
for all smooth functions, and $\Phi^h_t = U_h^{[t/h]} \to S_t$ in distribution, as $h \to 0$.
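Proposition A1 can be illustrated by a direct simulation: scaled sums of i.i.d. heavy-tail waiting times approximate the β-stable subordinator. In the sketch below, the tail is normalized so that P(T > m) = m^(−β)/β exactly (rather than only asymptotically), and the parameter values are illustrative choices.

```python
import numpy as np

rng = np.random.default_rng(5)

def scaled_walk(beta, h, t):
    """One sample of the scaled random walk of Proposition A1: the sum of [t/h]
    i.i.d. waiting times with P(T > m) = m^(-beta)/beta, scaled by h^(1/beta)."""
    k = int(t / h)
    u = rng.random(k)
    T = (beta * u) ** (-1.0 / beta)   # inverse-transform sampling of the heavy tail
    return h ** (1.0 / beta) * T.sum()

beta, t = 0.6, 1.0
samples = np.array([scaled_walk(beta, 1e-3, t) for _ in range(2000)])
print(np.median(samples))   # approximates the median of the beta-stable subordinator S_t
```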

Appendix B. Time-Nonhomogeneous Stable-like Processes

Stable Lévy subordinators are spatially and temporally homogeneous Markov processes generated by operators (A2). In our story, the key role belongs to the time-nonhomogeneous extension of these processes, which we shall refer to as time-nonhomogeneous stable-like subordinators. These are the spatially homogeneous Markov processes $X^t_{s,x}$ on $\mathbf{R}_+$ generated by the family of operators of the type
$$L_t f(x) = \omega_t \int_0^\infty \frac{f(x+y) - f(x)}{y^{1+\beta_t}}\, dy \tag{A4}$$
with some continuous functions ω t , β t on R + such that
$$0 < \min_t \beta_t \leq \max_t \beta_t < 1, \quad 0 < \min_t \omega_t \leq \max_t \omega_t < \infty. \tag{A5}$$
Remark A1.
More studied in the literature are the time homogeneous, but spatially nonhomogeneous processes, generated by operators, which are similar to (A4), but where the functions ω t and β t of the time variable t are substituted by the functions ω x , β x of the spatial variable x. These processes are usually referred to as stable-like processes.
Since time-dependent generators lack many standard properties of generators of linear semigroups, let us stress for clarity that by saying that the process $X^t_{s,x}$ is generated by the family (A4) we mean the following: its Markov transition operators
$$\Phi^{s,t} f(x) = \mathbf{E}\, f(X^t_{s,x}) = \mathbf{E}(f(X_t) \mid X_s = x)$$
are well defined as operators in the space $C_0(\mathbf{R}_+)$ (of continuous functions on $\mathbf{R}_+$ vanishing at zero and at infinity) for $s \leq t$; they form a backward propagator, that is, they satisfy the chain rule $\Phi^{r,s}\Phi^{s,t} = \Phi^{r,t}$ for $r < s < t$; the subspace $C^1_0(\mathbf{R}_+)$ of $C_0(\mathbf{R}_+)$ consisting of continuously differentiable functions with bounded derivatives is invariant under all $\Phi^{s,t}$; and finally, for any $f \in C^1_0(\mathbf{R}_+)$, the function $f_s = \Phi^{s,t}f$ satisfies the pseudo-differential equation in backward time
$$\frac{d}{ds} f_s = -L_s f_s, \quad s \leq t. \tag{A6}$$
For any continuous functions satisfying (A5) there exists a unique Markov process $X^t_{s,x}$ satisfying all the required conditions; this follows from a general result on time-nonhomogeneous extensions of Lévy processes, see Proposition 7.1 of [1]. For our purposes, it is handy to also have a concrete representation of the propagator $\Phi^{s,t}$ in terms of transition probabilities. Namely, Equation (A6) for $L_s$ from (A4) can be written in the pseudo-differential form as
$$\frac{d}{ds} f_s = -\psi_s(-i\nabla)\, f_s, \quad s \leq t, \tag{A7}$$
where the symbol ψ equals
$$\psi_t(p) = \omega_t \int_0^\infty \frac{e^{irp} - 1}{r^{1+\beta_t}}\, dr. \tag{A8}$$
Standard calculations show (see, e.g., Proposition 9.3.2 of [16]) that this is in fact a power function, that is
$$\psi_t(p) = \omega_t\, \Gamma(-\beta_t)\, |p|^{\beta_t} \exp\{-i\pi\beta_t\, \mathrm{sgn}(p)/2\}, \tag{A9}$$
where $\mathrm{sgn}(p)$ denotes the sign of p and Γ is the Euler Gamma function. Solving Equation (A7) via the Fourier transform, we find that its solution with the terminal condition $f_t = f$ is given by the formula
$$f_s(x) = \int_0^\infty G_{t,s}(y - x)\, f(y)\, dy, \tag{A10}$$
where the Green function G, which represents the transition probability density of the process $X^r_{t,x}$, is given by the following integral:
$$G_{r,t}(x) = \frac{1}{2\pi} \int_{\mathbf{R}} e^{-ipx} \exp\Big\{ \int_t^r \psi_\tau(p)\, d\tau \Big\}\, dp = \frac{1}{2\pi} \int_{\mathbf{R}} e^{-ipx} \exp\Big\{ \int_t^r \omega_\tau\, \Gamma(-\beta_\tau)\, |p|^{\beta_\tau} \exp\{-i\pi\beta_\tau\, \mathrm{sgn}(p)/2\}\, d\tau \Big\}\, dp. \tag{A11}$$
From this representation it follows (see, e.g., Proposition 9.3.6 from [16]) that G r , t ( x ) is infinitely differentiable in t and x for t > 0 and x 0 and that for large x, G has the following asymptotic behavior:
$$G_{r,t}(x) \sim c_0(r-t)\, x^{-(1 + \min_s \beta_s)}, \quad \frac{\partial^k}{\partial x^k} G_{r,t}(x) \sim c_k(r-t)\, x^{-(1 + k + \min_s \beta_s)}, \quad \frac{\partial}{\partial t} G_{r,t}(x) \sim c_t\, x^{-(1+\beta_t)}, \tag{A12}$$
with some constants c 0 , c t and c k , k N .
It is also seen from (A11) that $G_{r,t}(x)$ tends to the Dirac delta function $\delta(x)$ as $r - t \to 0$, and that
$$\frac{\partial G_{r,t}}{\partial t} = -\psi_t(-i\nabla)\, G_{r,t}, \quad \frac{\partial G_{r,t}}{\partial r} = \psi_r(-i\nabla)\, G_{r,t}. \tag{A13}$$
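The Fourier representation (A11) is easy to evaluate numerically. The sketch below computes the density on a small grid for constant ω and β, so that the time integral in the exponent reduces to (r − t)ψ(p); the truncation of the Fourier integral, the grid sizes and the parameter values are illustrative choices.

```python
import numpy as np
from math import gamma, pi

def green_function(x, r, t, beta, omega=1.0, p_max=200.0, n_p=200001):
    """Numerical evaluation of the Green function (A11) for constant omega, beta:
    G_{r,t}(x) = (1/2pi) int e^{-ipx} exp{(r - t) psi(p)} dp, with psi as in (A9)."""
    p = np.linspace(-p_max, p_max, n_p)
    psi = omega * gamma(-beta) * np.abs(p) ** beta * np.exp(-1j * pi * beta * np.sign(p) / 2)
    integrand = np.exp(-1j * p * x) * np.exp((r - t) * psi)
    dp = p[1] - p[0]
    return (integrand.sum() * dp).real / (2 * pi)   # Riemann sum of the Fourier integral

beta, r, t = 0.7, 1.0, 0.0
xs = np.array([0.5, 1.0, 2.0, 5.0])
dens = np.array([green_function(x, r, t, beta) for x in xs])
print(dens)               # values of the transition density at a few points
print(np.all(dens > 0))   # sanity check: a probability density is nonnegative
```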

Appendix C. Stationary Problems and Dynkin’s Martingales

Let us recall here a very standard piece of theory about Dynkin's martingales in a form tailored to our purposes. Let $(X^t_{x,s}, S^t_{x,s})$ (where $(x, s)$ denotes the initial position) be a (time-homogeneous) Markov process in $\Omega \times \mathbf{R}_+$, with Ω some metric space, generated by an operator L, so that the operators of the semigroup $T^t f(x, s) = \mathbf{E}\, f(X^t_{x,s}, S^t_{x,s})$ satisfy the equation $\dot{T}^t f = L T^t f$ for some space of continuous functions f. Then for any function f from this class the process
$$M^t_f = f(X^t_{x,s}, S^t_{x,s}) - f(x, s) - \int_0^t Lf(X^r_{x,s}, S^r_{x,s})\, dr$$
is a martingale, called Dynkin’s martingale.
If σ = σ x , s is a stopping time with a uniformly bounded expectation, one can apply Doob’s optional sampling theorem to conclude that
$$\mathbf{E}\Big[ f(X^\sigma_{x,s}, S^\sigma_{x,s}) - f(x, s) - \int_0^\sigma Lf(X^r_{x,s}, S^r_{x,s})\, dr \Big] = 0.$$
In particular, if L f = 0 , it follows that
$$f(x, s) = \mathbf{E}\, f(X^\sigma_{x,s}, S^\sigma_{x,s}).$$
Suppose L can be written in the form
$$Lf(x, s) = L_1 f(x, s) + \int_0^\infty (f(x, s+r) - f(x, s))\, \nu(x, s, dr)$$
with $L_1$ a Lévy–Khinchin-type operator acting on the variable x (with coefficients that may depend on x) and some Lévy kernel $\nu(x, s, dr)$ (that is, $\sup_{s,x} \int \min(r, 1)\, \nu(x, s, dr) < \infty$) such that also $\min_{s,x} \int \min(r, 1)\, \nu(x, s, dr) > 0$. For any T let us define a modification of the process $(X^t_{x,s}, S^t_{x,s})$ that stops once $S(x, s)$ reaches T. Clearly the modified process has the generator
$$\tilde{L} f(x, s) = L_1 f(x, s) + \int_0^{T-s} (f(x, s+r) - f(x, s))\, \nu(x, s, dr) + (f(x, T) - f(x, s)) \int_{T-s}^\infty \nu(x, s, dr). \tag{A14}$$
Due to the assumptions on ν, the stopping time σ at which $S(x, s)$ reaches T has a uniformly bounded expectation. Hence we can apply Dynkin's martingale to conclude that if $\tilde{L} f = 0$ and $f(x, T) = \psi(x)$ with a given function ψ, then
$$f(x, s) = \mathbf{E}\, \psi(X^\sigma_{x,s}). \tag{A15}$$
Thus Equation (A15) is the probabilistic representation for the solution to the boundary-value problem: L ˜ f = 0 and f ( x , T ) = ψ ( x ) . As a consequence of this formula one also gets the uniqueness of the solution to this boundary-value problem.

References

  1. Kolokoltsov, V.N. Nonlinear Markov Processes and Kinetic Equations; Cambridge Tracts in Mathematics; Cambridge University Press: Cambridge, UK; New York, NY, USA, 2010; Volume 182. [Google Scholar]
  2. Zaslavsky, G.M. Fractional kinetic equation for Hamiltonian chaos. Phys. D Nonlinear Phenom. 1994, 76, 110–122. [Google Scholar] [CrossRef]
  3. Caputo, M. Linear models of dissipation whose Q is almost frequency independent—II. Geophys. J. Intern. 1967, 13, 529–539. [Google Scholar] [CrossRef]
  4. Dzherbashian, M.M.; Nersesian, A.B. Fractional derivatives and the Cauchy problem for differential equations of fractional order. Fract. Calc. Appl. Anal. 2020, 23, 1810–1836. [Google Scholar] [CrossRef]
  5. Kochubei, A.N.; Kondratiev, Y. Fractional kinetic hierarchies and intermittency. Kinet. Relat. Models 2017, 10, 725–740. [Google Scholar] [CrossRef]
  6. Kolokoltsov, V.N.; Malafeyev, O.A. Many Agent Games in Socio-Economic Systems: Corruption, Inspection, Coalition Building, Network Growth, Security; Springer Series in Operations Research and Financial Engineering; Springer Nature: Cham, Switzerland, 2019. [Google Scholar]
  7. Baleanu, D.; Diethelm, K.; Scalas, E.; Trujillo, J.J. Fractional Calculus: Models and Numerical Methods, 2nd ed.; Series on Complexity, Nonlinearity and Chaos; World Scientific Publishing: Singapore, 2017; Volume 5. [Google Scholar]
  8. Kiryakova, V. Generalized Fractional Calculus and Applications; Pitman Research Notes in Mathematics Series; Longman Scientific: Harlow, UK; John Wiley and Sons: New York, NY, USA, 1994; Volume 301. [Google Scholar]
  9. Podlubny, I. Fractional Differential Equations, An Introduction to Fractional Derivatives, Fractional Differential Equations, to Methods of Their Solution and Some of Their Applications; Mathematics in Science and Engineering; Academic Press, Inc.: San Diego, CA, USA, 1999; Volume 198. [Google Scholar]
  10. Diethelm, K. The Analysis of Fractional Differential Equations; Lecture Notes in Mathematics; Springer: Berlin/Heidelberg, Germany, 2010. [Google Scholar]
  11. Kallenberg, O. Foundations of Modern Probability, 2nd ed.; Springer: Berlin/Heidelberg, Germany, 2002. [Google Scholar]
  12. Kolokoltsov, V.N. Markov Processes, Semigroups and Generators; DeGruyter Studies in Mathematics; DeGruyter: Berlin, Germany, 2011; Volume 38. [Google Scholar]
  13. Norris, J. Cluster Coagulation. Comm. Math. Phys. 2000, 209, 407–435. [Google Scholar] [CrossRef]
  14. Gnedenko, B.V.; Korolev, V.Y. Random Summation: Limit Theorems and Applications; CRC Press: Boca Raton, FL, USA, 1996. [Google Scholar]
  15. Meerschaert, M.M.; Scheffler, H.-P. Limit Distributions for Sums of Independent Random Vectors; Wiley Series in Probability and Statistics; John Wiley and Son: Hoboken, NJ, USA, 2001. [Google Scholar]
  16. Kolokoltsov, V.N. Differential Equations on Measures and Functional Spaces; Birkhäuser Advanced Texts Basler Lehrbücher; Birkhäuser: Cham, Switzerland, 2019. [Google Scholar]