Article

Branching Random Walks with Two Types of Particles on Multidimensional Lattices

by Iuliia Makarova 1,†, Daria Balashova 1,†, Stanislav Molchanov 2,† and Elena Yarovaya 1,*,†

1 Department of Probability Theory, Lomonosov Moscow State University, 119234 Moscow, Russia
2 Department of Mathematics and Statistics, National Research University Higher School of Economics, 101000 Moscow, Russia
* Author to whom correspondence should be addressed.
† These authors contributed equally to this work.
Mathematics 2022, 10(6), 867; https://doi.org/10.3390/math10060867
Submission received: 4 February 2022 / Revised: 28 February 2022 / Accepted: 5 March 2022 / Published: 9 March 2022
(This article belongs to the Special Issue New Advances and Applications of Extreme Value Theory)

Abstract:
We consider a continuous-time branching random walk on a multidimensional lattice with two types of particles and an infinite number of initial particles. The main results are devoted to the study of the generating function and the limiting behavior of the moments of subpopulations generated by a single particle of each type. We assume that particle types differ from each other not only by the laws of branching, as in multi-type branching processes, but also by the laws of walking. For a critical branching process at each lattice point and recurrent random walk of particles, the effect of limit spatial clustering of particles over the lattice is studied. A model illustrating epidemic propagation is also considered. In this model, we consider two types of particles: infected and immunity generated. Initially, there is an infected particle that can infect others. Here, for the local number of particles of each type at a lattice point, we study the moments and their limiting behavior. Additionally, the effect of intermittency of the infected particles is studied for a supercritical branching process at each lattice point. Simulations are presented to demonstrate the effect of limit clustering for the epidemiological model.

1. Introduction

The branching random walk (BRW) is one of the widely used tools for describing the processes associated with the birth, death, migration, and immigration of particles [1,2,3,4]. BRWs occur in population dynamics [5] and have numerous applications, e.g., in genetics [6] and demography [7].
The continuous-time BRWs presented in this paper are two-type branching processes combined with random walks of particles on the multidimensional lattice $\mathbb{Z}^d$, $d \in \mathbb{N}$. We mainly study the distribution of subpopulations generated by a single particle of each type. We assume that each particle can produce not only particles of its own type but also particles of the other type. We also assume that particles cannot change their type; in Section 6, however, we lift this condition for a particular case.
The study of multi-type processes is one of the most interesting and challenging problems in the theory of random processes. Historically, such processes were apparently first considered by Sevastyanov in [8]. He considered both discrete- and continuous-time branching processes with a finite number of types and studied the limit distribution of particles under different conditions. Nowadays, this problem is studied in detail by various research groups. In [9,10], for example, the authors consider a more complicated setting in which the number of types is countable rather than finite. They consider the subclass of Galton–Watson processes called lower Hessenberg branching processes. In contrast to our studies, they investigated processes with discrete time, and the random walk was considered in a strip. Several works by Vatutin and co-authors [11,12] are devoted to multi-type branching processes with discrete time and a finite number of types in a random environment, but without walking of particles.
The structure of the paper is as follows. In Section 2, we describe a two-type BRW on $\mathbb{Z}^d$ with infinitely many initial particles of both types. Here, we also define the main objects of the study. In Section 3, we study the first moments of the subpopulations generated by a single particle of each type and find their asymptotic behavior at the sites of $\mathbb{Z}^d$. To this end, we first obtain differential equations for the generating functions of subpopulations generated by a single particle of each type in Lemma 2 from Section 3.1. In order to find solutions of the corresponding equations, we turn to the equations for the Fourier transforms of the corresponding moments in Section 3.2 and show that these equations can be solved explicitly. This allows us to obtain explicit solutions (41) and (43) for the Fourier transforms of the corresponding moments. The results are then applied in Section 3.3 to find the asymptotics of the solutions for the Fourier transforms of the first moments of the subpopulations in the case of finite variance of the jumps. In Section 4, we study the second moments of the subpopulations. In Section 5, we study the particle clustering effect for BRWs under additional assumptions: we assume that the two-type branching process is critical at every point of $\mathbb{Z}^d$ and that the jump distribution of the underlying random walk has light tails. In Section 6, we discard one of the assumptions imposed on the model in Section 2, Section 3 and Section 4 and instead assume that particles of the first type can occasionally change their type to the second. This model can illustrate the situation related to epidemic spread, in particular the spread of COVID-19 around the world. We refer to particles of the first type as infected and to particles of the second type as carrying immunity against COVID-19. Here, we assume that an infected particle changes its type after a short period of time, thereby acquiring immunity; however, only one particle can do so within such a period. In Section 7, the algorithm for modeling the processes studied in Section 5 and Section 6 is presented and examined using the Python programming language.

2. Description of the Model

Here, we consider a population model with two types of particles. Let $N_i(t, y)$, $i = 1, 2$, be the number of particles of type $i$ at time $t > 0$ at the site $y \in \mathbb{Z}^d$, $d \geq 1$. Then, the total population at the point $y \in \mathbb{Z}^d$ at time $t > 0$ can be represented as the following column vector with non-negative integer components:
$$N(t, y) = [N_1(t, y), N_2(t, y)]^{T}. \qquad (1)$$
We assume that $N_i(0, x) = l_i$ for $i = 1, 2$ and all $x \in \mathbb{Z}^d$.
We assume that the evolution of particles of each type consists of several possibilities. First, a particle of type $i$, $i = 1, 2$, can die with mortality rate $\mu_i \geq 0$. Second, each particle of type $i$ can produce new particles of either type. We denote by $\beta_i(k, l) \geq 0$, $k + l \geq 2$, the rate at which a particle of type $i$ produces $k$ particles of type $1$ and $l$ particles of type $2$. Then, we define the corresponding branching generating function (without particle death) for $i = 1, 2$, see, e.g., [8]:
$$F_i(z_1, z_2) = \sum_{k + l \geq 2} z_1^{k} z_2^{l}\, \beta_i(k, l). \qquad (2)$$
Remark 1.
In our notation, $\mu_i = \beta_i(0, 0)$, $i = 1, 2$, and we do not consider the case when a particle of type $i = 1, 2$ can transform into a particle of type $j = 1, 2$, $j \neq i$; hence $\beta_1(0, 1) = \beta_2(1, 0) = 0$. Additionally, we assume that
$$\mu_1 + \sum_{k + l \geq 2} \beta_1(k, l) = -\beta_1(1, 0) > 0, \qquad \mu_2 + \sum_{k + l \geq 2} \beta_2(k, l) = -\beta_2(0, 1) > 0,$$
where $\beta_1(1, 0)$ and $\beta_2(0, 1)$ correspond to the case when nothing happens to the particle.
Recall that particles can also jump between points of the lattice. We assume that the probability of a jump from a point $x$ to a point $x + v$ during a small time interval $dt$ equals $\varkappa_i a_i(x, x + v)\, dt + o(dt)$, $i = 1, 2$, where $\varkappa_i > 0$ is the diffusion coefficient. In what follows, we consider a symmetric random walk, i.e., $a_i(x, y) = a_i(y, x)$. Moreover, we assume that the random walk is homogeneous in space, $a_i(x, x + v) = a_i(v)$, and irreducible, so that $\mathrm{span}\{v : a_i(v) > 0\} = \mathbb{Z}^d$. In addition, $a_i(0) = -1$ and $\sum_{v} a_i(v) = 0$.
Then, the migration operator has the form
$$(\mathcal{L}_i \psi)(x) := (\mathcal{L}_i \psi(\cdot))(x) := \varkappa_i \sum_{v} \big(\psi(x + v) - \psi(x)\big)\, a_i(v). \qquad (3)$$
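For readers who wish to experiment numerically (cf. the Python simulations of Section 7), the following is a minimal sketch that applies the operator $\mathcal{L}_i$ to a function on a finite window of $\mathbb{Z}$; the nearest-neighbour kernel $a(\pm 1) = 1/2$, $a(0) = -1$ and the zero boundary treatment are illustrative assumptions, not part of the model above.

```python
import numpy as np

def migration_operator(psi, kappa, a):
    """Apply (L psi)(x) = kappa * sum_v (psi(x+v) - psi(x)) a(v) on a finite
    window of Z, treating psi as zero outside the window (illustration only)."""
    out = np.zeros_like(psi)
    for x in range(len(psi)):
        for v, a_v in a.items():
            if v == 0:
                continue
            neighbour = psi[x + v] if 0 <= x + v < len(psi) else 0.0
            out[x] += kappa * (neighbour - psi[x]) * a_v
    return out

a = {-1: 0.5, 0: -1.0, 1: 0.5}            # symmetric nearest-neighbour kernel on Z
psi = np.zeros(21); psi[10] = 1.0         # delta function at the centre of the window
print(migration_operator(psi, kappa=1.0, a=a)[9:12])   # discrete Laplacian: [0.5, -1.0, 0.5]
```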
Let us introduce the subpopulations, which can be represented as the following column vectors:
$$\mathbf{n}_1(t, x, y) = [n_{11}(t, x, y), n_{12}(t, x, y)]^{T}, \qquad \mathbf{n}_2(t, x, y) = [n_{21}(t, x, y), n_{22}(t, x, y)]^{T}. \qquad (4)$$
Here, $\mathbf{n}_i(t, x, y)$ is the vector of particles at the point $y$ generated by a single particle of type $i$ which at time $t = 0$ was at the site $x \in \mathbb{Z}^d$. Its components $n_{ij}(t, x, y)$ are the numbers of particles of type $j$ at the point $y$ generated by a single particle of type $i$ located at $x$ at the moment $t = 0$. Note that
$$n_{ij}(0, x, y) = \delta_i(j)\,\delta_x(y), \qquad (5)$$
where $\delta_u(v)$ is the Kronecker delta on $\mathbb{Z}^d$ (or $\mathbb{R}$), that is, for $u, v \in \mathbb{Z}^d$ (or $\mathbb{R}$),
$$\delta_u(v) = \begin{cases} 1, & u = v; \\ 0, & u \neq v. \end{cases}$$
Remark 2.
In the BRW under consideration, we assume that both the random walk and the branching process are “homogeneous”. Namely, the underlying random walk for each type of particles $i = 1, 2$ is homogeneous in space, so that $a_i(x, y) = a_i(x - y, 0) = a_i(x - y)$. At the same time, the branching process (which includes death and birth of particles) is also “homogeneous” because all intensities $\mu_i$, $\beta_i(k, l)$, $k + l \geq 2$, $i = 1, 2$, are constant and depend only on the type of particles (and not on the lattice points).
Such a “homogeneity” simplifies the relations which describe the evolution of the considered BRW. First of all, we conclude that for all $t \geq 0$ the probability $\mathsf{P}\{n_{ij}(t, x, y) = k\}$ equals $\mathsf{P}\{n_{ij}(t, x - y, 0) = k\}$ for all $k \in \mathbb{Z}_{+}$, so that
$$\mathsf{P}\{n_{ij}(t, x, y) = k\} \equiv \mathsf{P}\{n_{ij}(t, x - y, 0) = k\}, \qquad t \geq 0. \qquad (6)$$
To prove this equality, we consider the process $n_{ij}(t, x, y)$, which starts at some lattice point $x$, so that $n_{ij}(0, x, y) = \delta_i(j)\delta_x(y)$. Then, for each trajectory
$$x \to x_1 \to x_2 \to \dots \to x_{n-1} \to y,$$
which describes the transition of a particle from the point $x$ to the point $y$, there exists the “trajectory shifted by $y$”
$$x - y \to x_1 - y \to x_2 - y \to \dots \to x_{n-1} - y \to 0,$$
which describes the transition of a particle from the point $x - y$ to the point $0$. Due to the spatial homogeneity of the random walk, all transition intensities along the two trajectories coincide ($a_i(x, y) = a_i(x - y, 0) = a_i(x - y)$), and because of the “branching homogeneity”, all branching intensities are equal at every lattice point. Since $n_{ij}(0, x - y, 0) = n_{ij}(0, x, y)$, we conclude that Equation (6) holds for all $t \geq 0$.
From (6), we obtain the same relations for all quantities derived from $n_{ij}(t, x, y)$. In particular, for $\mathsf{E}\, n_{ij}(t, x, y)$ we get
$$\mathsf{E}\, n_{ij}(t, x, y) \equiv \mathsf{E}\, n_{ij}(t, x - y, 0), \qquad t \geq 0. \qquad (7)$$
Finally, note that a similar relation [13] holds for the transition probabilities $p_i(t, x, y)$, $i = 1, 2$ (the definition will be given later):
$$p_i(t, x, y) \equiv p_i(t, x - y, 0), \qquad t \geq 0. \qquad (8)$$
The proof of the latter relation can also be obtained from the representation (47) in Section 3.3.
Remark 3.
From Equations (6) and (8) obtained in Remark 2, we conclude that, to investigate the BRW under consideration starting at a lattice point $x$, it suffices to consider the case $x = 0$. This simplifies the subsequent exposition.
Now, using the notation from Equation (4), we obtain the following representation of the total population specified by Equation (1):
$$N(t, y) = \sum_{x \in \mathbb{Z}^d} \sum_{s \in \{1, \dots, l_1\}} \mathbf{n}_{1, s}(t, x, y) + \sum_{x \in \mathbb{Z}^d} \sum_{m \in \{1, \dots, l_2\}} \mathbf{n}_{2, m}(t, x, y), \qquad (9)$$
where $\mathbf{n}_{i, l}(t, x, y)$ is the subpopulation generated by the $l$-th particle of type $i$ located at the point $x$ at the time $t = 0$. Note that both internal series in Equation (9) do not depend on the order of enumeration of particles.
The components of the vector $N(t, y)$ are
$$N_i(t, y) = \sum_{x \in \mathbb{Z}^d} \sum_{s \in \{1, \dots, l_1\}} n_{1i, s}(t, x, y) + \sum_{x \in \mathbb{Z}^d} \sum_{m \in \{1, \dots, l_2\}} n_{2i, m}(t, x, y), \qquad (10)$$
where $i = 1, 2$.
Given $z = (z_1, z_2)$, let us introduce the generating function
$$\Phi_i(t, x, y; z) = \mathsf{E}\, z_1^{n_{i1}(t, x, y)} z_2^{n_{i2}(t, x, y)}. \qquad (11)$$
This generating function specifies the evolution of a single particle of type i = 1 , 2 . Let us consider what can happen to this particle (later we can use it to obtain a differential equation for the generating functions). First, the initial particle can die at a point x with probability μ i d t + o ( d t ) (then the subpopulation of this particle disappears). Second, this particle can produce k particles of type 1 and l particles of type 2 with probability β i ( k , l ) d t + o ( d t ) . Third, the particle can jump from a point x to a point x + v with probability ϰ i a i ( v ) d t + o ( d t ) . Finally, nothing can happen to a particle during time d t . From this, we get
Lemma 1.
The generating functions $\Phi_i(t, x, y; z)$, $i = 1, 2$, specified by Equation (11), satisfy the differential equation
$$\frac{\partial \Phi_i(t, x, y; z)}{\partial t} = (\mathcal{L}_i \Phi_i(t, \cdot, y; z))(x) + \mu_i \big(1 - \Phi_i(t, x, y; z)\big) + F_i\big(\Phi_1(t, x, y; z), \Phi_2(t, x, y; z)\big) - \sum_{k + l \geq 2} \beta_i(k, l)\, \Phi_i(t, x, y; z); \qquad (12)$$
$$\Phi_i(0, x, y; z) = \begin{cases} 1, & x \neq y; \\ z_i, & x = y. \end{cases} \qquad (13)$$
Proof. 
Given $i = 1, 2$, consider the generating function $\Phi_i(t, x, y; z)$ at the time moment $t + dt$:
$$\Phi_i(t + dt, x, y; z) = \Big(1 - \varkappa_i\, dt - \mu_i\, dt - \sum_{k + l \geq 2} \beta_i(k, l)\, dt\Big) \Phi_i(t, x, y; z) + \varkappa_i \sum_{v \neq 0} \Phi_i(t, x + v, y; z)\, a_i(v)\, dt + \mu_i\, dt + \sum_{k + l \geq 2} \beta_i(k, l)\, \Phi_1^{k}(t, x, y; z)\, \Phi_2^{l}(t, x, y; z)\, dt + o(dt).$$
Then,
$$\Phi_i(t + dt, x, y; z) - \Phi_i(t, x, y; z) = -\Big(\varkappa_i + \mu_i + \sum_{k + l \geq 2} \beta_i(k, l)\Big) \Phi_i(t, x, y; z)\, dt + \varkappa_i \sum_{v \neq 0} \Phi_i(t, x + v, y; z)\, a_i(v)\, dt + \mu_i\, dt + \sum_{k + l \geq 2} \beta_i(k, l)\, \Phi_1^{k}(t, x, y; z)\, \Phi_2^{l}(t, x, y; z)\, dt + o(dt).$$
Therefore,
$$\frac{\partial \Phi_i(t, x, y; z)}{\partial t} = (\mathcal{L}_i \Phi_i(t, \cdot, y; z))(x) + \mu_i \big(1 - \Phi_i(t, x, y; z)\big) + \sum_{k + l \geq 2} \beta_i(k, l) \big(\Phi_1^{k}(t, x, y; z)\, \Phi_2^{l}(t, x, y; z) - \Phi_i(t, x, y; z)\big).$$
Here, according to Equation (2), we have
$$\sum_{k + l \geq 2} \beta_i(k, l)\, \Phi_1^{k}(t, x, y; z)\, \Phi_2^{l}(t, x, y; z) = F_i\big(\Phi_1(t, x, y; z), \Phi_2(t, x, y; z)\big),$$
and hence
$$\frac{\partial \Phi_i(t, x, y; z)}{\partial t} = (\mathcal{L}_i \Phi_i(t, \cdot, y; z))(x) + \mu_i \big(1 - \Phi_i(t, x, y; z)\big) + F_i\big(\Phi_1(t, x, y; z), \Phi_2(t, x, y; z)\big) - \sum_{k + l \geq 2} \beta_i(k, l)\, \Phi_i(t, x, y; z).$$
The initial condition for the latter equation follows from Equation (5):
$$\Phi_i(0, x, y; z) = \mathsf{E}\, z_1^{n_{i1}(0, x, y)} z_2^{n_{i2}(0, x, y)} = \mathsf{E}\, z_1^{\delta_i(1)\delta_x(y)} z_2^{\delta_i(2)\delta_x(y)} = z_1^{\delta_i(1)\delta_x(y)} z_2^{\delta_i(2)\delta_x(y)} = z_i^{\delta_x(y)}.$$
Thus, we obtain the desired Equations (12) and (13), which completes the proof of Lemma 1. □
Remark 4.
If we assume that
$$\beta_i(k, l) \leq \frac{c_0^{\,k + l}}{k!\, l!}, \qquad k + l \geq 2, \qquad (14)$$
for some $c_0 > 0$, then the Carleman condition holds [14], which guarantees that for each $i = 1, 2$ the function $F_i(z_1, z_2)$ from Equation (2) is analytic in the region $|z_i - 1| < \delta_0$ for some $\delta_0 > 0$ [15].
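Before turning to the moment equations, it may help to see the dynamics behind Lemma 1 in simulation. The sketch below is a Gillespie-type realization of the subpopulation generated by one particle on $\mathbb{Z}$ (in the spirit of the Python modeling in Section 7); all rate values and the nearest-neighbour jump kernel are illustrative assumptions, not parameters taken from the paper, and a branching event is interpreted as the parent being replaced by its offspring, consistent with Equation (12).

```python
import random
from collections import Counter

# Illustrative rates and nearest-neighbour jumps (assumptions, not from the paper).
kappa = {1: 1.0, 2: 1.0}                     # diffusion coefficients
mu    = {1: 0.3, 2: 0.3}                     # mortality rates
beta  = {1: {(2, 0): 0.2, (1, 1): 0.1},      # beta_1(k, l)
         2: {(0, 2): 0.2, (1, 1): 0.1}}      # beta_2(k, l)

def simulate(t_max, start_type=1, start_x=0, seed=1):
    """One realization of the subpopulation generated by a single particle."""
    rng = random.Random(seed)
    rate_of = {i: kappa[i] + mu[i] + sum(beta[i].values()) for i in (1, 2)}
    particles = [(start_type, start_x)]       # list of (type, position) pairs
    t = 0.0
    while particles:
        total = sum(rate_of[i] for i, _ in particles)
        t += rng.expovariate(total)
        if t > t_max:
            break
        idx = rng.choices(range(len(particles)),
                          weights=[rate_of[i] for i, _ in particles])[0]
        i, x = particles.pop(idx)
        u = rng.uniform(0.0, rate_of[i])
        if u < kappa[i]:                       # jump to a nearest neighbour
            particles.append((i, x + rng.choice((-1, 1))))
        elif u < kappa[i] + mu[i]:             # death: the particle disappears
            pass
        else:                                  # branching: parent replaced by offspring
            (k, l), = rng.choices(list(beta[i]), weights=list(beta[i].values()))
            particles += [(1, x)] * k + [(2, x)] * l
    return Counter(particles)                  # counts of (type j, site y) at time t_max

print(simulate(t_max=5.0))
```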

3. The First Moments

Recall that the goal of our article is to study the moments of the random variables n i j ( t , x , y ) , i , j = 1 , 2 . In this section, we will consider the first moments. For this purpose, in Section 3.1, in Lemma 2, we obtain differential equations for the generating functions of subpopulations generated by a single particle of each type. In order to find solutions to the corresponding equations, we turn to the equations for the Fourier transforms of the corresponding moments in Section 3.2 and show that the corresponding equations can be solved explicitly. This allows us to obtain explicit solutions (41) and (43) for the Fourier transforms of the corresponding moments. The results are then applied in Section 3.3 to find the asymptotics of the solutions for the Fourier transforms of the first moments of subpopulations in the case of finite variance of the jumps.

3.1. Differential Equations for Moments

Define $m_{ij}^{(1)}(t, x, y) = \mathsf{E}\, n_{ij}(t, x, y)$ and prove the following lemma, which plays an important role in what follows.
Lemma 2.
Let Equation (14) be true. Then, for each $i, j = 1, 2$, the functions $m_{ij}^{(1)}(t, x, y)$ satisfy the differential equation
$$\frac{\partial m_{ij}^{(1)}(t, x, y)}{\partial t} = (\mathcal{L}_i m_{ij}^{(1)}(t, \cdot, y))(x) - \mu_i m_{ij}^{(1)}(t, x, y) - \sum_{k + l \geq 2} \beta_i(k, l)\, m_{ij}^{(1)}(t, x, y) + \sum_{k + l \geq 2} \beta_i(k, l) \big(k\, m_{1j}^{(1)}(t, x, y) + l\, m_{2j}^{(1)}(t, x, y)\big); \qquad (15)$$
$$m_{ij}^{(1)}(0, x, y) = \delta_i(j)\,\delta_x(y). \qquad (16)$$
Proof. 
Differentiating Equation (11) with respect to $z_j$, $j = 1, 2$, we get
$$\frac{\partial \Phi_i(t, x, y; z)}{\partial z_j} = \frac{\partial}{\partial z_j} \mathsf{E}\, z_1^{n_{i1}(t, x, y)} z_2^{n_{i2}(t, x, y)} = \mathsf{E}\, n_{ij}(t, x, y)\, z_1^{n_{i1}(t, x, y) - \delta_j(1)} z_2^{n_{i2}(t, x, y) - \delta_j(2)},$$
from which, by taking $z = (z_1, z_2) = (1, 1)$, we obtain
$$\frac{\partial \Phi_i(t, x, y; z)}{\partial z_j}\Big|_{z = (1, 1)} = \mathsf{E}\, n_{ij}(t, x, y) = m_{ij}^{(1)}(t, x, y). \qquad (17)$$
Now, differentiating Equation (12) with respect to $z_j$, we can write
$$\frac{\partial^2 \Phi_i(t, x, y; z)}{\partial t\, \partial z_j} = \frac{\partial}{\partial z_j} \Big( (\mathcal{L}_i \Phi_i(t, \cdot, y; z))(x) + \mu_i \big(1 - \Phi_i(t, x, y; z)\big) + \sum_{k + l \geq 2} \beta_i(k, l) \big(\Phi_1^{k}(t, x, y; z)\, \Phi_2^{l}(t, x, y; z) - \Phi_i(t, x, y; z)\big) \Big) = \big(\mathcal{L}_i (\partial_{z_j} \Phi_i(t, \cdot, y; z))\big)(x) - \mu_i\, \partial_{z_j} \Phi_i(t, x, y; z) + \sum_{k + l \geq 2} \beta_i(k, l) \big( k\, \partial_{z_j} \Phi_1(t, x, y; z)\, \Phi_1^{k - 1}(t, x, y; z)\, \Phi_2^{l}(t, x, y; z) + l\, \Phi_1^{k}(t, x, y; z)\, \partial_{z_j} \Phi_2(t, x, y; z)\, \Phi_2^{l - 1}(t, x, y; z) - \partial_{z_j} \Phi_i(t, x, y; z) \big).$$
Again, by taking $z = (z_1, z_2) = (1, 1)$ in the above formula and applying Equation (17), we find that the left-hand side of the last equation takes the form
$$\frac{\partial^2 \Phi_i(t, x, y; z)}{\partial t\, \partial z_j}\Big|_{z = (1, 1)} = \frac{\partial m_{ij}^{(1)}(t, x, y)}{\partial t}, \qquad (18)$$
while the right-hand side of the same equation equals
$$(\mathcal{L}_i m_{ij}^{(1)}(t, \cdot, y))(x) - \mu_i m_{ij}^{(1)}(t, x, y) + \sum_{k + l \geq 2} \beta_i(k, l) \big( k\, m_{1j}^{(1)}(t, x, y) + l\, m_{2j}^{(1)}(t, x, y) - m_{ij}^{(1)}(t, x, y) \big). \qquad (19)$$
By combining Equations (18) and (19), we obtain
$$\frac{\partial m_{ij}^{(1)}(t, x, y)}{\partial t} = (\mathcal{L}_i m_{ij}^{(1)}(t, \cdot, y))(x) - \mu_i m_{ij}^{(1)}(t, x, y) - \sum_{k + l \geq 2} \beta_i(k, l)\, m_{ij}^{(1)}(t, x, y) + \sum_{k + l \geq 2} \beta_i(k, l) \big(k\, m_{1j}^{(1)}(t, x, y) + l\, m_{2j}^{(1)}(t, x, y)\big).$$
The initial condition for the latter equation can be found from Equation (5):
$$m_{ij}^{(1)}(0, x, y) = \mathsf{E}\, n_{ij}(0, x, y) = \mathsf{E}\, \delta_i(j)\,\delta_x(y) = \delta_i(j)\,\delta_x(y).$$
Lemma 2 is proved. □
Remark 5.
From Lemma 2 and the general theory of differential equations in Banach spaces, for any $i, j = 1, 2$ one can easily obtain the inequality
$$|m_{ij}^{(1)}(t, x, y)| < \infty \quad \text{for all } t \geq 0$$
(see, e.g., the proofs of similar facts in [1,16]); in order not to overload the exposition, its elementary proof is given in Section 3.2.
Nevertheless, let us explain the main ideas of the corresponding proof. Equation (15) with initial condition (16) can be treated as a linear differential equation in a Banach space whose right-hand side (for each $t$ and $y$) is a linear bounded operator acting in any of the spaces $l^p(\mathbb{Z}^d)$, $p \geq 1$. Since the initial condition $m_{ij}^{(1)}(0, x, y)$ for each $y$, as a function of the variable $x$, also belongs to each of the spaces $l^p(\mathbb{Z}^d)$, $p \geq 1$, it follows, as shown, for example, in [1,16], that $m_{ij}^{(1)}(t, x, y)$ (for each $t$ and $y$) as a function of the variable $x$ also belongs to each of the spaces $l^p(\mathbb{Z}^d)$, $p \geq 1$, and is thus bounded.
In Lemma 2, we have obtained the differential equations for the subpopulations generated by a single particle of each type. Now, we want to obtain the differential equation for the full population N ( t , y ) .
Define $\mathbf{m}_i^{(1)}(t, x, y) = \mathsf{E}\, \mathbf{n}_i(t, x, y)$, $i = 1, 2$, and rewrite Equations (15) from Lemma 2 in the following form:
$$\frac{\partial \mathbf{m}_1^{(1)}(t, x, y)}{\partial t} = (\mathcal{L}_1 \mathbf{m}_1^{(1)}(t, \cdot, y))(x) + \sum_{k + l \geq 2} l\, \beta_1(k, l)\, \mathbf{m}_2^{(1)}(t, x, y) + \Big(\sum_{k + l \geq 2} (k - 1)\, \beta_1(k, l) - \mu_1\Big) \mathbf{m}_1^{(1)}(t, x, y), \qquad (20)$$
$$\frac{\partial \mathbf{m}_2^{(1)}(t, x, y)}{\partial t} = (\mathcal{L}_2 \mathbf{m}_2^{(1)}(t, \cdot, y))(x) + \sum_{k + l \geq 2} k\, \beta_2(k, l)\, \mathbf{m}_1^{(1)}(t, x, y) + \Big(\sum_{k + l \geq 2} (l - 1)\, \beta_2(k, l) - \mu_2\Big) \mathbf{m}_2^{(1)}(t, x, y), \qquad (21)$$
with the initial conditions
$$\mathbf{m}_1^{(1)}(0, x, y) = [\delta_x(y), 0]^{T}, \qquad \mathbf{m}_2^{(1)}(0, x, y) = [0, \delta_x(y)]^{T}.$$
Let us denote $\mathbf{n}(t, x, y) = [\mathbf{n}_1(t, x, y), \mathbf{n}_2(t, x, y)]^{T}$ and $\mathbf{m}^{(1)}(t, x, y) = \mathsf{E}\, \mathbf{n}(t, x, y)$. Then, the pair of Equations (20) and (21) can be rewritten in a more compact form:
$$\frac{\partial \mathbf{m}^{(1)}(t, x, y)}{\partial t} = \begin{bmatrix} (\mathcal{L}_1 \mathbf{m}_1^{(1)}(t, \cdot, y))(x) \\ (\mathcal{L}_2 \mathbf{m}_2^{(1)}(t, \cdot, y))(x) \end{bmatrix} + V \begin{bmatrix} \mathbf{m}_1^{(1)} \\ \mathbf{m}_2^{(1)} \end{bmatrix},$$
where $V$ is the matrix
$$V = \begin{pmatrix} -\mu_1 + \sum_{k + l \geq 2} (k - 1)\, \beta_1(k, l) & \sum_{k + l \geq 2} l\, \beta_1(k, l) \\ \sum_{k + l \geq 2} k\, \beta_2(k, l) & -\mu_2 + \sum_{k + l \geq 2} (l - 1)\, \beta_2(k, l) \end{pmatrix}.$$
The above calculations allow us to obtain the equation for the full population at the site $y \in \mathbb{Z}^d$. Using the representation of $N_i(t, y)$, $i = 1, 2$, in Equation (10), we obtain for $\mathbf{m}^{(1)}(t, y) := \mathsf{E}\, N(t, y) = [m_1^{(1)}(t, y), m_2^{(1)}(t, y)]^{T}$ the following formula:
$$m_i^{(1)}(t, y) = \mathsf{E}\, N_i(t, y) = \sum_{x \in \mathbb{Z}^d} \sum_{s \in \{1, \dots, l_1\}} m_{1i, s}^{(1)}(t, x, y) + \sum_{x \in \mathbb{Z}^d} \sum_{m \in \{1, \dots, l_2\}} m_{2i, m}^{(1)}(t, x, y).$$
Taking the partial derivative with respect to $t$ of each component of $\mathbf{m}^{(1)}(t, y)$, we derive from the above formula the equation
$$\frac{\partial \mathbf{m}^{(1)}(t, y)}{\partial t} = \sum_{x \in \mathbb{Z}^d} \sum_{s \in \{1, \dots, l_1\}} \frac{\partial \mathbf{m}_{1, s}^{(1)}(t, x, y)}{\partial t} + \sum_{x \in \mathbb{Z}^d} \sum_{m \in \{1, \dots, l_2\}} \frac{\partial \mathbf{m}_{2, m}^{(1)}(t, x, y)}{\partial t},$$
where $\mathbf{m}_{i, l}^{(1)}(t, x, y) := \mathsf{E}\, \mathbf{n}_{i, l}(t, x, y)$.
Formula (22) describes how the behavior of the full population depends on the behavior of each subpopulation. Later on, we will study the behavior of subpopulations in more detail.
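As a quick numerical illustration of the matrix $V$ introduced above, the sketch below assembles $V$ from a set of assumed rates and reports its eigenvalues, whose largest real part governs the growth of the mean population; the rate values are arbitrary and chosen here only for illustration.

```python
import numpy as np

beta1 = {(2, 0): 0.2, (1, 1): 0.1}     # beta_1(k, l), illustrative values
beta2 = {(0, 2): 0.2, (1, 1): 0.1}     # beta_2(k, l), illustrative values
mu1, mu2 = 0.3, 0.3

V = np.array([
    [-mu1 + sum((k - 1) * r for (k, l), r in beta1.items()),
     sum(l * r for (k, l), r in beta1.items())],
    [sum(k * r for (k, l), r in beta2.items()),
     -mu2 + sum((l - 1) * r for (k, l), r in beta2.items())],
])
print(V)
print("eigenvalues of V:", np.linalg.eigvals(V))
```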

3.2. Solutions of Differential Equations for the First Moments

In this section, we find an explicit form of the solutions of the differential equations obtained in Lemma 2. To find these solutions, we will use the discrete Fourier transform. Recall that the Fourier transform $\hat f(\theta)$ of a function $f(u)$, $u \in \mathbb{Z}^d$, is defined as
$$\hat f(\theta) = \sum_{u \in \mathbb{Z}^d} e^{i(\theta, u)} f(u), \qquad \theta \in [-\pi, \pi]^d, \qquad (23)$$
where $(\cdot, \cdot)$ is the dot product in $\mathbb{R}^d$, while the inverse Fourier transform has the form
$$f(u) = \frac{1}{(2\pi)^d} \int_{[-\pi, \pi]^d} \hat f(\theta)\, e^{-i(\theta, u)}\, d\theta. \qquad (24)$$
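The transform pair (23)–(24) can be checked numerically for $d = 1$ by approximating the integral over $[-\pi, \pi]$ with a rectangle rule on a uniform periodic grid; the finitely supported test sequence below is arbitrary.

```python
import numpy as np

def fourier(f, u, thetas):
    """Discrete Fourier transform (23): hat f(theta) = sum_u exp(i theta u) f(u)."""
    return np.array([np.sum(np.exp(1j * th * u) * f) for th in thetas])

def inverse_fourier(f_hat, thetas, point):
    """Inverse transform (24); the integral over [-pi, pi] is replaced by a
    rectangle-rule sum, which is exact here because the integrand is periodic."""
    dtheta = thetas[1] - thetas[0]
    return (np.sum(f_hat * np.exp(-1j * thetas * point)) * dtheta).real / (2 * np.pi)

u = np.arange(-2, 3)
f = np.array([0.0, 0.1, 0.5, 0.3, 0.1])                    # arbitrary test sequence
thetas = np.linspace(-np.pi, np.pi, 512, endpoint=False)
f_hat = fourier(f, u, thetas)
print([round(inverse_fourier(f_hat, thetas, p), 6) for p in u])
# recovers [0.0, 0.1, 0.5, 0.3, 0.1]
```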
By applying the Fourier transform (23) to Equations (20) and (21), we obtain the equations
$$\frac{\partial \hat{\mathbf{m}}_1^{(1)}(t, \theta, y)}{\partial t} = \varkappa_1 \hat a_1(\theta)\, \hat{\mathbf{m}}_1^{(1)}(t, \theta, y) + \sum_{k + l \geq 2} l\, \beta_1(k, l)\, \hat{\mathbf{m}}_2^{(1)}(t, \theta, y) + \Big(\sum_{k + l \geq 2} (k - 1)\, \beta_1(k, l) - \mu_1\Big) \hat{\mathbf{m}}_1^{(1)}(t, \theta, y), \qquad (25)$$
$$\frac{\partial \hat{\mathbf{m}}_2^{(1)}(t, \theta, y)}{\partial t} = \varkappa_2 \hat a_2(\theta)\, \hat{\mathbf{m}}_2^{(1)}(t, \theta, y) + \sum_{k + l \geq 2} k\, \beta_2(k, l)\, \hat{\mathbf{m}}_1^{(1)}(t, \theta, y) + \Big(\sum_{k + l \geq 2} (l - 1)\, \beta_2(k, l) - \mu_2\Big) \hat{\mathbf{m}}_2^{(1)}(t, \theta, y), \qquad (26)$$
with the initial conditions
$$\hat{\mathbf{m}}_1^{(1)}(0, \theta, y) = [e^{i(\theta, y)}, 0]^{T}, \qquad \hat{\mathbf{m}}_2^{(1)}(0, \theta, y) = [0, e^{i(\theta, y)}]^{T}.$$
To simplify Formulas (25) and (26), let us introduce the following notation:
$$a(\theta) = \varkappa_1 \hat a_1(\theta) + \sum_{k + l \geq 2} (k - 1)\, \beta_1(k, l) - \mu_1; \qquad (27)$$
$$b = \sum_{k + l \geq 2} l\, \beta_1(k, l) \geq 0; \qquad (28)$$
$$c = \sum_{k + l \geq 2} k\, \beta_2(k, l) \geq 0; \qquad (29)$$
$$d(\theta) = \varkappa_2 \hat a_2(\theta) + \sum_{k + l \geq 2} (l - 1)\, \beta_2(k, l) - \mu_2. \qquad (30)$$
With this notation, Equations (25) and (26) can be represented in a more compact form:
$$\frac{\partial \hat{\mathbf{m}}_1^{(1)}(t, \theta, y)}{\partial t} = a(\theta)\, \hat{\mathbf{m}}_1^{(1)}(t, \theta, y) + b\, \hat{\mathbf{m}}_2^{(1)}(t, \theta, y), \qquad \hat{\mathbf{m}}_1^{(1)}(0, \theta, y) = [e^{i(\theta, y)}, 0]^{T}; \qquad (31)$$
$$\frac{\partial \hat{\mathbf{m}}_2^{(1)}(t, \theta, y)}{\partial t} = c\, \hat{\mathbf{m}}_1^{(1)}(t, \theta, y) + d(\theta)\, \hat{\mathbf{m}}_2^{(1)}(t, \theta, y), \qquad \hat{\mathbf{m}}_2^{(1)}(0, \theta, y) = [0, e^{i(\theta, y)}]^{T}. \qquad (32)$$
To get a solution for this last system of differential equations, let us recall some facts from the theory of two-dimensional linear differential equations and perform some auxiliary calculations.
Remark 6.
Represent Equations (31) and (32) arising in our treatment in the conventional form of a system of linear differential equations with two variables (see details in [17]):
$$\frac{du(t)}{dt} = a\, u(t) + b\, v(t), \qquad u(0) = u_0, \qquad (33)$$
$$\frac{dv(t)}{dt} = c\, u(t) + d\, v(t), \qquad v(0) = v_0, \qquad (34)$$
assuming that $a$, $b$, $c$ and $d$ here are some numerical parameters. In order to “keep the connection” with Equations (31) and (32) and not consider options unnecessary in what follows, we will assume throughout this remark that
$$b, c \geq 0.$$
As is known (see, e.g., [17] or some other handbook on the theory of differential equations), the behavior of solutions of Equations (33) and (34) is completely determined, in a sense, by the roots of the characteristic equation of the coefficient matrix on the right-hand side of Equations (33) and (34):
$$\lambda^2 - (a + d)\lambda + (ad - bc) = 0. \qquad (35)$$
These roots are as follows:
$$\lambda_1 = \frac{a + d + D}{2}, \qquad \lambda_2 = \frac{a + d - D}{2}, \qquad \text{where } D = \sqrt{(a - d)^2 + 4bc}.$$
Note that under the assumption $b, c \geq 0$, the expression under the square root is non-negative, and therefore the roots $\lambda_{1,2}$ are real.
Let $D = 0$; this is possible if and only if
$$a = d \quad \text{and} \quad (b = 0 \text{ or } c = 0).$$
In this case, $\lambda_1 = \lambda_2$ coincide, and moreover $\lambda_1 = \lambda_2 = a = d$. Then (see, e.g., [17]), the solution $u(t)$ of Equations (33) and (34) is a linear combination of the functions $e^{\lambda t}$ and $t e^{\lambda t}$:
$$u(t) = (C_1 + C_2 t)\, e^{\lambda t}.$$
The solution $v(t)$ can be expressed likewise.
Let $D \neq 0$; this is possible if and only if
$$a \neq d \quad \text{or} \quad (b \neq 0 \text{ and } c \neq 0).$$
In this case, $\lambda_1 \neq \lambda_2$ and (see, e.g., [17]) the solution $u(t)$ of Equations (33) and (34) is a linear combination of the functions $e^{\lambda_1 t}$ and $e^{\lambda_2 t}$:
$$u(t) = C_1 e^{\lambda_1 t} + C_2 e^{\lambda_2 t}. \qquad (36)$$
The solution $v(t)$ can be expressed likewise.
Let us write out the precise forms of the solutions $u(t)$ and $v(t)$ of Equations (33) and (34); they will be needed in the further analysis. Consider the following combinations of the parameters $b$ and $c$:
$$b = 0,\ c \geq 0 \qquad \text{or} \qquad b \geq 0,\ c = 0 \qquad \text{or} \qquad b > 0,\ c > 0,$$
which exhaust all possible combinations of these parameters under the condition $b, c \geq 0$. The fact that the first two of these conditions intersect does not interfere with further considerations. We also note that the case $b = c = 0$ is covered by both of the first two cases.
Case $b = 0$, $c \geq 0$.
Here, $u(t)$ can be found directly from Equation (33):
$$u(t) = e^{a t} u_0.$$
To find $v(t)$ it suffices to substitute the obtained expression for $u(t)$ into Equation (34) and to solve the resulting non-homogeneous linear differential equation:
$$v(t) = e^{d t} v_0 + \int_0^{t} e^{d(t - s)}\, c\, e^{a s} u_0\, ds.$$
The value of the integral on the right-hand side of the obtained equality depends on whether the equality $a = d$ holds. Direct evaluation shows that
$$v(t) = \begin{cases} \Big(v_0 - \dfrac{c u_0}{a - d}\Big) e^{d t} + \dfrac{c u_0}{a - d}\, e^{a t}, & a \neq d, \\[2mm] (v_0 + c u_0 t)\, e^{d t}, & a = d. \end{cases}$$
Case $b \geq 0$, $c = 0$.
This case is treated similarly to the previous one, and we get:
$$u(t) = \begin{cases} \Big(u_0 - \dfrac{b v_0}{d - a}\Big) e^{a t} + \dfrac{b v_0}{d - a}\, e^{d t}, & a \neq d, \\[2mm] (u_0 + b v_0 t)\, e^{a t}, & a = d, \end{cases} \qquad v(t) = e^{d t} v_0.$$
Case $b > 0$, $c > 0$.
In this case, both roots $\lambda_{1,2}$ of Equation (35) are distinct, and moreover $\lambda_1 > \lambda_2$. In order to find the solutions $u(t)$ and $v(t)$ of Equations (33) and (34), let us first take $t = 0$ in Equation (36). Then, we obtain the following equation for the initial condition $u(0)$:
$$C_1 + C_2 = u(0) = u_0. \qquad (37)$$
Further, find $b v(t)$ from Equation (33):
$$b v(t) = u'(t) - a u(t) = C_1 (\lambda_1 - a) e^{\lambda_1 t} + C_2 (\lambda_2 - a) e^{\lambda_2 t}.$$
Using the obtained expression, we get the equation for $b v(0)$:
$$C_1 (\lambda_1 - a) + C_2 (\lambda_2 - a) = b v(0) = b v_0. \qquad (38)$$
By solving the resulting system of Equations (37) and (38), we get
$$C_1 = \frac{b v_0 + u_0 (a - \lambda_2)}{\lambda_1 - \lambda_2}, \qquad C_2 = \frac{u_0 (\lambda_1 - a) - b v_0}{\lambda_1 - \lambda_2},$$
from which
$$u(t) = \frac{b v_0 + u_0 (a - \lambda_2)}{\lambda_1 - \lambda_2}\, e^{\lambda_1 t} + \frac{u_0 (\lambda_1 - a) - b v_0}{\lambda_1 - \lambda_2}\, e^{\lambda_2 t}, \qquad v(t) = \frac{\lambda_1 - a}{b} \cdot \frac{b v_0 + u_0 (a - \lambda_2)}{\lambda_1 - \lambda_2}\, e^{\lambda_1 t} + \frac{\lambda_2 - a}{b} \cdot \frac{u_0 (\lambda_1 - a) - b v_0}{\lambda_1 - \lambda_2}\, e^{\lambda_2 t}.$$
Here, the last equation can be simplified by noting that
$$\frac{(\lambda_2 - a)(\lambda_1 - a)}{b} = -c, \qquad \lambda_1 - \lambda_2 = D.$$
As a result, we obtain:
$$u(t) = \frac{1}{D} \Big( \big(b v_0 + u_0 (a - \lambda_2)\big) e^{\lambda_1 t} + \big(u_0 (\lambda_1 - a) - b v_0\big) e^{\lambda_2 t} \Big), \qquad v(t) = \frac{1}{D} \Big( \big(v_0 (\lambda_1 - a) + c u_0\big) e^{\lambda_1 t} - \big(c u_0 + (\lambda_2 - a) v_0\big) e^{\lambda_2 t} \Big).$$
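The closed-form expressions for $u(t)$ and $v(t)$ in the case $b > 0$, $c > 0$ can be cross-checked against a direct numerical integration of Equations (33) and (34); the parameter values in the sketch below are arbitrary illustrative choices.

```python
import numpy as np
from scipy.integrate import solve_ivp

a, b, c, d = -0.2, 0.3, 0.4, -0.1           # illustrative parameters with b, c > 0
u0, v0 = 1.0, 0.5
D = np.sqrt((a - d) ** 2 + 4 * b * c)
lam1, lam2 = (a + d + D) / 2, (a + d - D) / 2

def u_exact(t):
    return ((b * v0 + u0 * (a - lam2)) * np.exp(lam1 * t)
            + (u0 * (lam1 - a) - b * v0) * np.exp(lam2 * t)) / D

def v_exact(t):
    return ((v0 * (lam1 - a) + c * u0) * np.exp(lam1 * t)
            - (c * u0 + (lam2 - a) * v0) * np.exp(lam2 * t)) / D

sol = solve_ivp(lambda t, y: [a * y[0] + b * y[1], c * y[0] + d * y[1]],
                (0.0, 5.0), [u0, v0], t_eval=[5.0], rtol=1e-10, atol=1e-12)
print("numeric :", sol.y[:, -1])
print("explicit:", u_exact(5.0), v_exact(5.0))   # should agree to high accuracy
```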
Now, we are able to write out the solutions of Equations (31) and (32). For this, it suffices to note that, although in the reasoning of Remark 6 it was implicitly assumed that the functions $u(t)$ and $v(t)$ are scalar, this assumption was never actually used, and the functions $u(t)$ and $v(t)$ may be assumed vector-valued, such as $\hat{\mathbf{m}}_i^{(1)}(t, \theta, y)$ in Equations (31) and (32).
One should also pay attention to the fact that in Equations (31) and (32), in contrast to Equations (33) and (34), the parameters $a$ and $d$ are actually functions of the variable $\theta$, that is, $a = a(\theta)$ and $d = d(\theta)$, and then the values $\lambda_1$, $\lambda_2$ and $D$ are also functions of the variable $\theta$:
$$\lambda_1(\theta) = \frac{a(\theta) + d(\theta) + D(\theta)}{2}, \qquad \lambda_2(\theta) = \frac{a(\theta) + d(\theta) - D(\theta)}{2}, \qquad (39)$$
and
$$D(\theta) = \sqrt{\big(a(\theta) - d(\theta)\big)^2 + 4bc}. \qquad (40)$$
Considering the above, we can write out the solutions m ^ 1 ( 1 ) ( t , θ , y ) and m ^ 2 ( 1 ) ( t , θ , y ) of Equations (31) and (32) using the appropriate initial conditions.
Case b = 0 , c 0 .
Here,
m ^ 1 ( 1 ) ( t , θ , y ) = e a ( θ ) t m ^ 1 ( 1 ) ( 0 , θ , y ) , m ^ 2 ( 1 ) ( t , θ , y ) = m ^ 2 ( 1 ) ( 0 , θ , y ) c a ( θ ) d ( θ ) m ^ 1 ( 1 ) ( 0 , θ , y ) e d ( θ ) t + + c a ( θ ) d ( θ ) m ^ 1 ( 1 ) ( 0 , θ , y ) e a ( θ ) t , if θ s . t . a ( θ ) d ( θ ) , ( m ^ 2 ( 1 ) ( 0 , θ , y ) + c m ^ 1 ( 1 ) ( 0 , θ , y ) t ) e d t , if θ s . t . a ( θ ) = d ( θ ) .
Case b 0 , c = 0 .
Here,
m ^ 1 ( 1 ) ( t , θ , y ) = m ^ 1 ( 1 ) ( 0 , θ , y ) b d ( θ ) a ( θ ) m ^ 2 ( 1 ) ( 0 , θ , y ) e a ( θ ) t + + b d ( θ ) a ( θ ) m ^ 2 ( 1 ) ( 0 , θ , y ) e d ( θ ) t , if θ s . t . a ( θ ) d ( θ ) , ( m ^ 1 ( 1 ) ( 0 , θ , y ) + b m ^ 2 ( 1 ) ( 0 , θ , y ) t ) e a ( θ ) t , if θ s . t . a ( θ ) = d ( θ ) , m ^ 2 ( 1 ) ( t , θ , y ) = e d ( θ ) t m ^ 2 ( 1 ) ( 0 , θ , y ) .
Case b > 0 , c > 0 .
Here,
m ^ 1 ( 1 ) ( t , θ , y ) = 1 D ( θ ) ( a ( θ ) λ 2 ( θ ) ) m ^ 1 ( 1 ) ( 0 , θ , y ) + b m ^ 2 ( 1 ) ( 0 , θ , y ) e λ 1 ( θ ) t + + 1 D ( θ ) ( λ 1 ( θ ) a ( θ ) ) m ^ 1 ( 1 ) ( 0 , θ , y ) b m ^ 2 ( 1 ) ( 0 , θ , y ) e λ 2 ( θ ) t m ^ 2 ( 1 ) ( t , θ , y ) = 1 D ( θ ) c m ^ 1 ( 1 ) ( 0 , θ , y ) + ( λ 1 ( θ ) a ( θ ) ) m ^ 2 ( 1 ) ( 0 , θ , y ) e λ 1 ( θ ) t + + 1 D ( θ ) c m ^ 1 ( 1 ) ( 0 , θ , y ) + ( λ 2 ( θ ) a ( θ ) ) m ^ 2 ( 1 ) ( 0 , θ , y ) e λ 2 ( θ ) t .
Finally, it needs to be remembered that each of the functions $\hat{\mathbf{m}}_1^{(1)}(t, \theta, y)$ and $\hat{\mathbf{m}}_2^{(1)}(t, \theta, y)$ is a two-component vector function. Therefore, taking the components of $\hat{\mathbf{m}}_1^{(1)}(t, \theta, y)$ and $\hat{\mathbf{m}}_2^{(1)}(t, \theta, y)$ in the three cases obtained above, we arrive at three cases of formulas for $\hat m_{11}^{(1)}(t, \theta, y)$, $\hat m_{12}^{(1)}(t, \theta, y)$, $\hat m_{21}^{(1)}(t, \theta, y)$ and $\hat m_{22}^{(1)}(t, \theta, y)$.
Case $b = 0$, $c \geq 0$. Here,
$$\hat m_{11}^{(1)}(t, \theta, y) = e^{i(\theta, y)} e^{a(\theta) t}; \qquad \hat m_{21}^{(1)}(t, \theta, y) = \begin{cases} \dfrac{c}{a(\theta) - d(\theta)}\, e^{i(\theta, y)} \big(e^{a(\theta) t} - e^{d(\theta) t}\big), & \theta \text{ s.t. } a(\theta) \neq d(\theta), \\[2mm] c\, t\, e^{i(\theta, y)} e^{a(\theta) t}, & \theta \text{ s.t. } a(\theta) = d(\theta); \end{cases} \qquad \hat m_{12}^{(1)}(t, \theta, y) = 0; \qquad \hat m_{22}^{(1)}(t, \theta, y) = e^{i(\theta, y)} e^{d(\theta) t}. \qquad (41)$$
Case $b \geq 0$, $c = 0$. Here,
$$\hat m_{11}^{(1)}(t, \theta, y) = e^{i(\theta, y)} e^{a(\theta) t}; \qquad \hat m_{21}^{(1)}(t, \theta, y) = 0; \qquad \hat m_{12}^{(1)}(t, \theta, y) = \begin{cases} \dfrac{b}{a(\theta) - d(\theta)}\, e^{i(\theta, y)} \big(e^{a(\theta) t} - e^{d(\theta) t}\big), & \theta \text{ s.t. } a(\theta) \neq d(\theta), \\[2mm] b\, t\, e^{i(\theta, y)} e^{d(\theta) t}, & \theta \text{ s.t. } a(\theta) = d(\theta); \end{cases} \qquad \hat m_{22}^{(1)}(t, \theta, y) = e^{i(\theta, y)} e^{d(\theta) t}. \qquad (42)$$
Case $b > 0$, $c > 0$. Here,
$$\hat m_{11}^{(1)}(t, \theta, y) = \frac{e^{i(\theta, y)}}{D(\theta)} \Big( \big(a(\theta) - \lambda_2(\theta)\big) e^{\lambda_1(\theta) t} + \big(\lambda_1(\theta) - a(\theta)\big) e^{\lambda_2(\theta) t} \Big); \qquad \hat m_{21}^{(1)}(t, \theta, y) = \frac{c\, e^{i(\theta, y)}}{D(\theta)} \Big( e^{\lambda_1(\theta) t} - e^{\lambda_2(\theta) t} \Big); \qquad \hat m_{12}^{(1)}(t, \theta, y) = \frac{b\, e^{i(\theta, y)}}{D(\theta)} \Big( e^{\lambda_1(\theta) t} - e^{\lambda_2(\theta) t} \Big); \qquad \hat m_{22}^{(1)}(t, \theta, y) = \frac{e^{i(\theta, y)}}{D(\theta)} \Big( \big(\lambda_1(\theta) - a(\theta)\big) e^{\lambda_1(\theta) t} - \big(\lambda_2(\theta) - a(\theta)\big) e^{\lambda_2(\theta) t} \Big). \qquad (43)$$
Remark 7.
The cases $b = 0$, $c > 0$ or $b > 0$, $c = 0$ describe the situation when particles of one type cannot produce offspring of both types. This has a real-life interpretation: consider a species with both “male” and “female” individuals in which the “male” individuals cannot produce offspring.
Remark 8.
An attentive reader will notice that our constructions are redundant in a sense. In the middle of this section, we made an effort to pass from equations for the functions $m_{ij}^{(1)}(t, x, y)$, $i, j = 1, 2$, to more general equations for the functions $\mathbf{m}_1^{(1)}(t, x, y)$ and $\mathbf{m}_2^{(1)}(t, x, y)$ and their Fourier transforms $\hat{\mathbf{m}}_1^{(1)}(t, \theta, y)$ and $\hat{\mathbf{m}}_2^{(1)}(t, \theta, y)$. Then, at the end of the section, we returned to the functions $m_{ij}^{(1)}(t, x, y)$, $i, j = 1, 2$ (or rather to their Fourier images $\hat m_{ij}^{(1)}(t, \theta, y)$, $i, j = 1, 2$). We emphasize once again that, from a technical point of view, this method of exposition is redundant; however, in our opinion, it contributes to a deeper understanding of the “nature of things” when analyzing the behavior of the functions $m_{ij}^{(1)}(t, x, y)$, $i, j = 1, 2$.
So, we have found the solutions of Equations (31) and (32). Applying the inverse Fourier transform (24) to Equations (41)–(43), we can obtain the solutions for $m_{ij}^{(1)}(t, x, y)$, $i, j = 1, 2$. Below, we find the asymptotic behavior of each subpopulation $m_{ij}^{(1)}(t, x, y)$ in one particular case.
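To illustrate how the formulas (41)–(43) are used together with the inverse transform (24), the following sketch evaluates $\hat m_{11}^{(1)}(t, \theta, y)$ for the case $b > 0$, $c > 0$ with $d = 1$ and a nearest-neighbour walk, and then inverts it numerically; all rates are assumed values chosen only for illustration.

```python
import numpy as np

# Illustrative parameters (assumed, not from the paper): d = 1, nearest-neighbour walk.
kappa, mu1, mu2 = 1.0, 0.3, 0.3
beta1 = {(2, 0): 0.2, (1, 1): 0.1}           # beta_1(k, l)
beta2 = {(0, 2): 0.2, (1, 1): 0.1}           # beta_2(k, l)

b = sum(l * r for (k, l), r in beta1.items())            # b of Equation (28)
c = sum(k * r for (k, l), r in beta2.items())            # c of Equation (29)

def a_hat(theta):                                        # Fourier symbol of the walk
    return np.cos(theta) - 1.0

def a_coef(theta):                                       # a(theta) of Equation (27)
    return kappa * a_hat(theta) + sum((k - 1) * r for (k, l), r in beta1.items()) - mu1

def d_coef(theta):                                       # d(theta) of Equation (30)
    return kappa * a_hat(theta) + sum((l - 1) * r for (k, l), r in beta2.items()) - mu2

def m11_hat(t, theta, y=0):                              # first formula of (43)
    D = np.sqrt((a_coef(theta) - d_coef(theta)) ** 2 + 4 * b * c)
    lam1 = (a_coef(theta) + d_coef(theta) + D) / 2
    lam2 = (a_coef(theta) + d_coef(theta) - D) / 2
    return np.exp(1j * theta * y) * ((a_coef(theta) - lam2) * np.exp(lam1 * t)
                                     + (lam1 - a_coef(theta)) * np.exp(lam2 * t)) / D

thetas = np.linspace(-np.pi, np.pi, 4096, endpoint=False)
dtheta = thetas[1] - thetas[0]
for x in range(4):                                       # m_11^(1)(t, x, 0) at t = 5
    vals = m11_hat(5.0, thetas, y=0) * np.exp(-1j * thetas * x)
    print(x, (np.sum(vals) * dtheta).real / (2 * np.pi))
```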

3.3. Asymptotic Behavior in the Case of Finite Variance of Jumps

In the previous section, in Equations (41)–(43), we found the solutions for the Fourier transform of the first moments of the subpopulations m ^ i j ( 1 ) ( t , θ , y ) , i , j = 1 , 2 . In this section, we obtain their asymptotic behavior in one particular case that is natural for applications.
Remark 9.
Consider the parabolic problem
$$\frac{\partial p(t, x, y)}{\partial t} = (\mathcal{L}_i p(t, \cdot, y))(x), \qquad p(0, x, y) = \delta_x(y), \qquad (44)$$
where the operators $\mathcal{L}_i$, $i = 1, 2$, are defined in (3).
By applying the discrete Fourier transform (23) to Equation (44), we find that the Fourier image $\hat p(t, \theta, y)$ of the function $p(t, x, y)$ satisfies the Cauchy problem
$$\frac{\partial \hat p(t, \theta, y)}{\partial t} = \varkappa\, \hat a_i(\theta)\, \hat p(t, \theta, y), \qquad \hat p(0, \theta, y) = e^{i(\theta, y)}, \qquad (45)$$
whose solution can be found explicitly:
$$\hat p(t, \theta, y) = e^{i(\theta, y)}\, e^{\varkappa \hat a_i(\theta) t}. \qquad (46)$$
Applying the inverse Fourier transform to Equation (46), we obtain the solution of Equation (44):
$$p(t, x, y) = \frac{1}{(2\pi)^d} \int_{[-\pi, \pi]^d} e^{\varkappa \hat a_i(\theta) t + i(\theta, y - x)}\, d\theta. \qquad (47)$$
Moreover, from here we see that $p(t, x, y)$ depends only on $x - y$, which gives an alternative proof of the corresponding assertion from Remark 2.
Now, we turn to the problem of finding $m_{ij}^{(1)}(t, x, y)$, $i, j = 1, 2$. Let $a_1(v) = a_2(v) =: a_*(v)$ for all $v \in \mathbb{Z}^d$ and $\varkappa_1 = \varkappa_2 = \varkappa$, so that the migration operators from Equation (3) coincide. Besides, consider the case when the underlying random walk has a finite variance of jumps, so that
$$\sum_{v \neq 0} a_*(v)\, |v|^2 < \infty, \qquad (48)$$
where $|\cdot|$ is the norm in $\mathbb{R}^d$.
As was demonstrated, e.g., in [1], under condition (48) the solution of the parabolic problem (44) has, for each $x, y \in \mathbb{Z}^d$, the following asymptotics:
$$p(t, x, y) \sim \gamma_d\, t^{-d/2}, \qquad t \to \infty, \qquad (49)$$
where
$$\gamma_d = \Big( (2\pi)^d\, \big|\det\big(\varkappa\, \hat a_*''(0)\big)\big| \Big)^{-1/2}$$
is a constant depending on the lattice dimension. For a more detailed description of the asymptotics (49), including the form of the remainder term, see [18].
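The asymptotics (49) is easy to verify numerically for $d = 1$ and the simple nearest-neighbour kernel with $\varkappa = 1$ (for which the constant equals $(2\pi\varkappa)^{-1/2}$): the sketch below evaluates the integral (47) by quadrature and compares $t^{1/2} p(t, 0, 0)$ with that constant.

```python
import numpy as np

kappa = 1.0
def a_hat(theta):                    # symbol of the nearest-neighbour walk on Z
    return np.cos(theta) - 1.0

def p(t, x, y, n=4096):              # transition probability via Equation (47)
    thetas = np.linspace(-np.pi, np.pi, n, endpoint=False)
    vals = np.exp(kappa * a_hat(thetas) * t + 1j * thetas * (y - x))
    return (np.sum(vals) * (thetas[1] - thetas[0])).real / (2 * np.pi)

gamma_1 = 1.0 / np.sqrt(2 * np.pi * kappa)    # expected constant for this kernel
for t in (10.0, 100.0, 1000.0):
    print(t, np.sqrt(t) * p(t, 0, 0), "vs", gamma_1)
```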
Let us now apply the above reasoning to Equations (31) and (32). Note that in the case where the migration operators $\mathcal{L}_1$ and $\mathcal{L}_2$ defined by Equation (3) coincide, i.e., $\mathcal{L}_1 = \mathcal{L}_2$, we can refine the representation (39) for $\lambda_1(\theta)$ and $\lambda_2(\theta)$ by using Equation (40), which yields
$$\lambda_{1,2}(\theta) = \frac{a(\theta) + d(\theta)}{2} \pm \frac{\big((a(\theta) - d(\theta))^2 + 4bc\big)^{1/2}}{2} = \varkappa \hat a(\theta) + C_1 \pm C_2, \qquad (51)$$
where
$$C_1 = \frac{a(\theta) + d(\theta)}{2} - \varkappa \hat a(\theta), \qquad C_2 = \frac{\big((a(\theta) - d(\theta))^2 + 4bc\big)^{1/2}}{2}.$$
Replacing $a(\theta)$, $b$, $c$ and $d(\theta)$ in Equation (51) by their values given by Equations (27)–(30), we obtain the following representations of $C_1$ and $C_2$:
$$C_1 = \frac{1}{2} \Big( \sum_{k + l \geq 2} \big[(k - 1)\, \beta_1(k, l) + (l - 1)\, \beta_2(k, l)\big] - (\mu_1 + \mu_2) \Big); \qquad C_2 = \frac{1}{2} \Big[ \Big( \sum_{k + l \geq 2} \big[(k - 1)\, \beta_1(k, l) - (l - 1)\, \beta_2(k, l)\big] - (\mu_1 - \mu_2) \Big)^2 + 4 \sum_{k + l \geq 2} l\, \beta_1(k, l) \sum_{k + l \geq 2} k\, \beta_2(k, l) \Big]^{1/2}.$$
Let us denote
$$r_1 = \sum_{k + l \geq 2} (k - 1)\, \beta_1(k, l) - \mu_1, \qquad r_2 = \sum_{k + l \geq 2} (l - 1)\, \beta_2(k, l) - \mu_2.$$
Then, in the case when $\varkappa_1 = \varkappa_2 = \varkappa$ and $b = 0$, $c > 0$ (or $b > 0$, $c = 0$), due to Equations (27) and (30) the following relation holds:
$$a(\theta) - d(\theta) = r_1 - r_2 \quad \text{for all } \theta,$$
that is, the difference $a(\theta) - d(\theta)$ does not depend on $\theta$. This means that either $a(\theta) - d(\theta) = 0$ for all values of $\theta$ or $a(\theta) - d(\theta) \neq 0$, also for all values of $\theta$. Moreover,
$$a(\theta) - d(\theta) = 0 \ \text{for all } \theta \iff r_1 - r_2 = 0 \iff C_2 = 0.$$
Consequently, in the case $r_1 = r_2$ we have not only $a_1(v) = a_2(v)$ for all $v \in \mathbb{Z}^d$, but also $\hat a_1(\theta) = \hat a_2(\theta)$ for all $\theta \in [-\pi, \pi]^d$.
Thus, from Equations (41)–(43) we obtain the following asymptotics as $t \to \infty$ (we prefer to consider the case $b = c = 0$ separately from the other cases).
Case $b = 0$, $c = 0$. Here, for each $x, y \in \mathbb{Z}^d$,
$$m_{11}^{(1)}(t, x, y) \sim e^{r_1 t}\, \gamma_d\, t^{-d/2}; \qquad m_{21}^{(1)}(t, x, y) = 0; \qquad m_{12}^{(1)}(t, x, y) = 0; \qquad m_{22}^{(1)}(t, x, y) \sim e^{r_2 t}\, \gamma_d\, t^{-d/2}.$$
Case $b = 0$, $c > 0$. Here, for each $x, y \in \mathbb{Z}^d$,
$$m_{11}^{(1)}(t, x, y) \sim e^{r_1 t}\, \gamma_d\, t^{-d/2}; \qquad m_{21}^{(1)}(t, x, y) \sim \begin{cases} c\, e^{r_1 t}\, \gamma_d\, t^{-d/2 + 1}, & \text{if } C_2 = 0, \\[1mm] \dfrac{c}{r_1 - r_2} \big(e^{r_1 t} - e^{r_2 t}\big)\, \gamma_d\, t^{-d/2}, & \text{if } C_2 \neq 0; \end{cases} \qquad m_{12}^{(1)}(t, x, y) = 0; \qquad m_{22}^{(1)}(t, x, y) \sim e^{r_2 t}\, \gamma_d\, t^{-d/2}.$$
Case $b > 0$, $c = 0$. Here, for each $x, y \in \mathbb{Z}^d$,
$$m_{11}^{(1)}(t, x, y) \sim e^{r_1 t}\, \gamma_d\, t^{-d/2}; \qquad m_{21}^{(1)}(t, x, y) = 0; \qquad m_{12}^{(1)}(t, x, y) \sim \begin{cases} b\, e^{r_2 t}\, \gamma_d\, t^{-d/2 + 1}, & \text{if } C_2 = 0, \\[1mm] \dfrac{b}{r_1 - r_2} \big(e^{r_1 t} - e^{r_2 t}\big)\, \gamma_d\, t^{-d/2}, & \text{if } C_2 \neq 0; \end{cases} \qquad m_{22}^{(1)}(t, x, y) \sim e^{r_2 t}\, \gamma_d\, t^{-d/2}.$$
Case $b > 0$, $c > 0$. Here, for each $x, y \in \mathbb{Z}^d$,
$$m_{11}^{(1)}(t, x, y) \sim \frac{e^{C_1 t}}{2 C_2} \Big( (r_1 - C_1 + C_2)\, e^{C_2 t} + (C_1 + C_2 - r_1)\, e^{-C_2 t} \Big)\, \gamma_d\, t^{-d/2}; \qquad m_{21}^{(1)}(t, x, y) \sim \frac{c\, e^{C_1 t}}{2 C_2} \Big( e^{C_2 t} - e^{-C_2 t} \Big)\, \gamma_d\, t^{-d/2}; \qquad m_{12}^{(1)}(t, x, y) \sim \frac{b\, e^{C_1 t}}{2 C_2} \Big( e^{C_2 t} - e^{-C_2 t} \Big)\, \gamma_d\, t^{-d/2}; \qquad m_{22}^{(1)}(t, x, y) \sim \frac{e^{C_1 t}}{2 C_2} \Big( (C_1 + C_2 - r_1)\, e^{C_2 t} + (r_1 - C_1 + C_2)\, e^{-C_2 t} \Big)\, \gamma_d\, t^{-d/2}.$$
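For a concrete feel for the constants entering the asymptotics above, the sketch below computes $r_1$, $r_2$, $C_1$ and $C_2$ for a set of assumed rates (the same illustrative values as in the earlier sketches) and prints the resulting exponential rates $C_1 \pm C_2$.

```python
# Illustrative rates (assumptions), the same as in the earlier sketches.
beta1 = {(2, 0): 0.2, (1, 1): 0.1}
beta2 = {(0, 2): 0.2, (1, 1): 0.1}
mu1, mu2 = 0.3, 0.3

r1 = sum((k - 1) * r for (k, l), r in beta1.items()) - mu1
r2 = sum((l - 1) * r for (k, l), r in beta2.items()) - mu2
b = sum(l * r for (k, l), r in beta1.items())
c = sum(k * r for (k, l), r in beta2.items())

C1 = 0.5 * (r1 + r2)
C2 = 0.5 * ((r1 - r2) ** 2 + 4 * b * c) ** 0.5
print("r1, r2 :", r1, r2)
print("C1, C2 :", C1, C2)
print("exponential rates C1 +/- C2:", C1 + C2, C1 - C2)
```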

4. The Second Moments

In this section, we will study the behavior of the second moments of the number of subpopulations. To do this, we will essentially use the technique developed in the previous section, so we will omit some technical details.

4.1. Differential Equations for Moments

Let us denote $m_{ij}^{(2)}(t, x, y) = \mathsf{E}\, n_{ij}^2(t, x, y)$ and let the estimate (14) be true. Our goal in this section is to obtain differential equations for $m_{ij}^{(2)}(t, x, y)$, $i, j = 1, 2$, similar to those obtained for $m_{ij}^{(1)}(t, x, y)$, $i, j = 1, 2$, in Section 3.1.
By taking the partial derivatives of the functions Φ i ( t , x , y ; z ) in Equation (11) over z 1 and z 2 we can get the following equations:
2 Φ i ( t , x , y ; z ) z j 2 = 2 E z 1 n i 1 ( t , x , y ; z ) z 2 n i 2 ( t , x , y ) z j 2 = E n i j ( t , x , y ) ( n i j ( t , x , y ) 1 ) z 1 n i 1 ( t , x , y ) 2 δ j ( 1 ) z 2 n i 2 ( t , x , y ) 2 δ j ( 2 )
Then, by fixing z 1 = z 2 = 1 in the last equation, we obtain
2 Φ i ( t , x , y ; z ) z j 2 | z = ( 1 , 1 ) = E n i j ( t , x , y ) ( n i j ( t , x , y ) 1 ) = m i j ( 2 ) ( t , x , y ) m i j ( 1 ) ( t , x , y ) .
Now, by differentiating both sides of Equation (12) from Lemma 1 over z j twice, we obtain:
3 Φ i ( t , x , y ; z ) z j 2 t = z j z j ( ( L i Φ i ( t , · , y ; z ) ) ( x ) + μ i ( 1 Φ i ( t , x , y ; z ) ) + F i ( Φ 1 ( t , x , y ; z ) , Φ 2 ( t , x , y ; z ) ) ) .
Taking here z = ( 1 , 1 ) and using the notation
m i j ( 2 ! ) ( t , x , y ) = m i j ( 2 ) ( t , x , y ) m i j ( 1 ) ( t , x , y )
we obtain the following representation of the left-hand side of Equation (52):
3 Φ i ( t , x , y ; z ) z j 2 t | z = ( 1 , 1 ) = t 2 Φ i ( t , x , y ; z ) z j 2 | z = ( 1 , 1 ) = m i j ( 2 ! ) ( t , x , y ) t
while the right-hand side of the same equation equals to:
z j z j ( ( L i Φ i ( t , · , y ; z ) ) ( x ) + μ i ( 1 Φ i ( t , x , y ; z ) ) + k + l 2 β i ( k , l ) ( Φ 1 k ( t , x , y ; z ) Φ 2 l ( t , x , y ; z ) Φ i ( t , x , y ; z ) ) ) | z = ( 1 , 1 ) = ( L i ( z j z j Φ i ( t , · , y ; z ) ) ( x ) μ i z j z j Φ i ( t , x , y ; z ) + k + l 2 β i ( k , l ) × ( k ( k 1 ) z j Φ 1 ( t , x , y ; z ) 2 Φ 1 k 2 ( t , x , y ; z ) Φ 2 l ( t , x , y ; z ) + k z j z j Φ 1 ( t , x , y ; z ) × Φ 1 k 1 ( t , x , y ; z ) Φ 2 l ( t , x , y ; z ) + 2 k l z j Φ 1 ( t , x , y ; z ) z j Φ 2 ( t , x , y ; z ) Φ 1 k 1 ( t , x , y ; z ) × Φ 2 l 1 ( t , x , y ; z ) + l ( l 1 ) z j Φ 2 ( t , x , y ; z ) 2 Φ 1 k ( t , x , y ; z ) Φ 2 l 2 ( t , x , y ; z ) + l z j z j Φ 2 ( t , x , y ; z ) Φ 1 k ( t , x , y ; z ) Φ 2 l 1 ( t , x , y ; z ) ( z j z j Φ i ( t , x , y ; z ) ) ) ) | z = ( 1 , 1 ) = ( L i m i j ( 2 ! ) ( t , · , y ) ) ( x ) μ i m i j ( 2 ! ) ( t , x , y ) + k + l 2 β i ( k , l ) ( k ( k 1 ) ( m 1 j ( 1 ) ( t , x , y ) ) 2 + k m 1 j ( 2 ! ) ( t , x , y ) + 2 k l m 1 j ( 1 ) ( t , x , y ) m 2 j ( 1 ) ( t , x , y ) + l ( l 1 ) ( m 2 j ( 1 ) ( t , x , y ) ) 2 + l m 2 j ( 2 ! ) ( t , x , y ) m i j ( 2 ! ) ( t , x , y ) ) .
By equating Equation (53) with (54) we get
m i j ( 2 ! ) ( t , x , y ) t = ( L i m i j ( 2 ! ) ( t , · , y ) ) ( x ) μ i m i j ( 2 ! ) ( t , x , y ) + k + l 2 β i ( k , l ) ( k m i j ( 2 ) ( t , x , y ) + k ( k 1 ) [ m 1 j ( 1 ) ( t , x , y ) ] 2 + l m 2 j ( 2 ) ( t , x , y ) + l ( l 1 ) [ m 2 j ( 1 ) ( t , x , y ) ] 2 + 2 k l m 1 j ( 1 ) ( t , x , y ) m 2 j ( 1 ) ( t , x , y ) k m 1 j ( 1 ) ( t , x , y ) l m 2 j ( 1 ) ( t , x , y ) ) k + l 2 β i ( k , l ) m i j ( 2 ! ) ( t , x , y ) ;
m i j ( 2 ! ) ( 0 , x , y ) 0 .
Finally, to obtain the differential equations for the second moments m i j ( 2 ) ( t , x , y ) , we add the term t m i j ( 1 ) ( t , x , y ) to each side of Equation (55). Then, we substitute the term m i j ( 1 ) ( t , x , y ) on the right side of the obtained expression by its representation (15). Then, the left side of the resulting equation takes the form t m i j ( 2 ) ( t , x , y ) , while the right side is equal to
( L i m i j ( 2 ! ) ( t , · , y ) ) ( x ) μ i m i j ( 2 ! ) ( t , x , y ) + k + l 2 β i ( k , l ) ( k ( k 1 ) ) m 1 j ( 1 ) ( t , x , y ) ) 2 + k m 1 j ( 2 ! ) ( t , x , y ) + 2 k l m 1 j ( 1 ) ( t , x , y ) m 2 j ( 1 ) ( t , x , y ) + l ( l 1 ) ( m 2 j ( 1 ) ( t , x , y ) ) 2 + l m 2 j ( 2 ! ) ( t , x , y ) m i j ( 2 ! ) ( t , x , y ) ) + ( L i m i j ( 1 ) ( t , · , y ) ) ( x ) μ i m i j ( 1 ) ( t , x , y ) + k + l 2 β i ( k , l ) ( k m 1 j ( 1 ) ( t , x , y ) + l m 2 j ( 1 ) ( t , x , y ) m i j ( 1 ) ( t , x , y ) ) = ( L i m i j ( 2 ) ( t , · , y ) ) ( x ) μ i m i j ( 2 ) ( t , x , y ) + k + l 2 β i ( k , l ) ( k m 1 j ( 2 ) ( t , x , y ) + l m 2 j ( 2 ) ( t , x , y ) + k m 1 j ( 2 ! ) ( t , x , y ) + 2 k l m 1 j ( 1 ) ( t , x , y ) m 2 j ( 1 ) ( t , x , y ) + l ( l 1 ) ( m 2 j ( 1 ) ( t , x , y ) ) 2 m i j ( 2 ) ( t , x , y ) )
Thus, we have proved the following lemma.
Lemma 3.
Let condition (14) hold. Then, the functions m i j ( 2 ) ( t , x , y ) , i , j = 1 , 2 , satisfy the differential equations
m i j ( 2 ) ( t , x , y ) t = ( L i m i j ( 2 ) ( t , · , y ) ) ( x ) μ i m i j ( 2 ) ( t , x , y ) + k + l 2 β i ( k , l ) ( k m 1 j ( 2 ) ( t , x , y ) + k ( k 1 ) [ m 1 j ( 1 ) ( t , x , y ) ] 2 + l m 2 j ( 2 ) ( t , x , y ) + l ( l 1 ) [ m 2 j ( 1 ) ( t , x , y ) ] 2 + 2 k l m 1 j ( 1 ) ( t , x , y ) m 2 j ( 1 ) ( t , x , y ) ) k + l 2 β i ( k , l ) m i j ( 2 ) ( t , x , y )
with the initial condition
m i j ( 2 ) ( 0 , x , y ) = δ i ( j ) δ x ( y ) .
Remark 10.
Similar considerations as in Remark 5 show that in this case Equation (57) can be treated as a linear differential equation in a Banach space whose right-hand side (for each t and y) is a linear bounded operator acting in any of the spaces l p ( Z d ) , p 1 . Therefore, for the same reasons as in Remark 5, we obtain that m i j ( 2 ) ( t , x , y ) (for each t and y) as a function of the variable x belongs to each of the spaces l p ( Z d ) , p 1 , and is thus bounded.
So, we have obtained the differential equations for the second moments m i j ( 2 ) ( t , x , y ) . In the next section, we will find the solutions for the obtained equations.

4.2. Solutions of Differential Equations for the Second Moments

In this section, (as in Section 3.2), we will consider the equations for m i j ( 2 ) ( t , x , y ) , i , j = 1 , 2 , which we obtained in Lemma 3, explicitly in terms of the Fourier transform. To do this, let us apply the Fourier transform (23) to the pair of functions ( m 1 j ( 2 ) ( t , x , y ) , m 2 j ( 2 ) ( t , x , y ) ) , j = 1 , 2 . Then, using the notation (27)–(30) from Section 3, we obtain
m ^ 1 j ( 2 ) ( t , θ , y ) t = a ( θ ) m ^ 1 j ( 2 ) ( t , θ , y ) + b m 2 j ( 2 ) ( t , θ , y ) + f 1 ( j ) ( t , θ , y ) ,
m ^ 2 j ( 2 ) ( t , θ , y ) t = c m ^ 1 j ( 2 ) ( t , θ , y ) + d ( θ ) m 2 j ( 2 ) ( t , θ , y ) + f 2 ( j ) ( t , θ , y ) ,
where
m ^ i j ( 2 ) ( 0 , θ , y ) = δ i ( j ) e i ( θ , y ) , i = 1 , 2 ,
and
f i ( j ) ( t , θ , y ) = k + l 2 β i ( k , l ) [ k ( k 1 ) m ^ 1 j ( 1 ) m ^ 1 j ( 1 ) ( t , θ , y ) + 2 k l m ^ 1 j ( 1 ) m ^ 2 j ( 1 ) ( t , θ , y ) + l ( l 1 ) m ^ 2 j ( 1 ) m ^ 2 j ( 1 ) ( t , θ , y ) ] .
Here,
$$(F * G)(t, \theta, y) = \frac{1}{(2\pi)^d} \int_{[-\pi, \pi]^d} F(t, \theta - v, y)\, G(t, v, y)\, dv,$$
i.e., $(F * G)(t, \theta, y)$ is the convolution of the functions $F(t, \theta, y)$ and $G(t, \theta, y)$ with respect to the variable $\theta$.
In what follows, we will need the explicit form of the solution of the following linear differential equation:
$$\frac{dx(t)}{dt} = k\, x(t) + f(t).$$
This solution can be readily obtained by the method of variation of parameters, also known as the method of variation of constants:
$$x(t) = e^{k t} x(0) + \int_0^{t} e^{k(t - s)} f(s)\, ds = e^{k t} \Big( x(0) + \int_0^{t} f(s)\, e^{-k s}\, ds \Big). \qquad (63)$$
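A quick numerical sanity check of Formula (63) with an arbitrary forcing term $f(t)$; the particular choices of $k$, $x(0)$ and $f$ below are purely illustrative.

```python
import numpy as np
from scipy.integrate import quad, solve_ivp

k, x0 = -0.4, 1.0
f = lambda s: np.sin(s)                      # illustrative forcing term

def x_formula(t):                            # Formula (63)
    integral, _ = quad(lambda s: np.exp(-k * s) * f(s), 0.0, t)
    return np.exp(k * t) * (x0 + integral)

sol = solve_ivp(lambda t, x: [k * x[0] + f(t)], (0.0, 3.0), [x0],
                t_eval=[3.0], rtol=1e-10, atol=1e-12)
print(x_formula(3.0), sol.y[0, -1])          # the two values should agree closely
```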
Case b = c = 0 .
Here, the functions f i ( j ) ( t , θ , y ) , i , j = 1 , 2 , in Equations (59) are identically zero, i.e.,
f i ( j ) ( t , θ , y ) 0 , for all i , j = 1 , 2 ,
while Equations (59) split into two independent homogeneous equations
m ^ 1 j ( 2 ) ( t , θ , y ) t = a ( θ ) m ^ 1 j ( 2 ) ( t , θ , y ) , m ^ 2 j ( 2 ) ( t , θ , y ) t = d ( θ ) m ^ 2 j ( 2 ) ( t , θ , y )
with the initial conditions given by Equation (61). Then, applying the formula (63) we can find the solutions of equations Equations (59):
m ^ 11 ( 2 ) ( t , θ , y ) = e i ( θ , y ) e a ( θ ) t ; m ^ 21 ( 2 ) ( t , θ , y ) = 0 ; m ^ 12 ( 2 ) ( t , θ , y ) = 0 ; m ^ 22 ( 2 ) ( t , θ , y ) = e i ( θ , y ) e d ( θ ) t .
Case b = 0 , c > 0 .
Here, the functions $f_1^{(j)}(t, \theta, y)$, $j = 1, 2$, in Equations (59) are identically zero, i.e.,
$$f_1^{(j)}(t, \theta, y) \equiv 0 \quad \text{for } j = 1, 2,$$
and Equations (59) take the “triangle” form
$$\frac{\partial \hat m_{1j}^{(2)}(t, \theta, y)}{\partial t} = a(\theta)\, \hat m_{1j}^{(2)}(t, \theta, y); \qquad \frac{\partial \hat m_{2j}^{(2)}(t, \theta, y)}{\partial t} = c\, \hat m_{1j}^{(2)}(t, \theta, y) + d(\theta)\, \hat m_{2j}^{(2)}(t, \theta, y) + f_2^{(j)}(t, \theta, y).$$
The solution of the first equation due to Formula (63) is clearly as follows:
m ^ 1 j ( 2 ) ( t , θ , y ) = m ^ 1 j ( 2 ) ( 0 , θ , y ) e a ( θ ) t = δ 1 ( j ) e i ( θ , y ) e a ( θ ) t ,
where the first equality follows from Equation (63), whereas the second equality follows from Equation (61).
To solve the second equation, we again use Formula (63) assuming x ( t ) = m ^ 2 j ( 2 ) ( t , θ , y ) , k = d ( θ ) and f ( t ) = c m ^ 1 j ( 2 ) ( t , θ , y ) + f 2 ( j ) ( t , θ , y ) . Then,
m ^ 2 j ( 2 ) ( t , θ , y ) = e d ( θ ) t m ^ 2 j ( 2 ) ( 0 , θ , y ) + 0 t c m ^ 1 j ( 2 ) ( s , θ , y ) + f 2 ( j ) ( s , θ , y ) e d ( θ ) s d s ,
where by Equation (61) we have
m ^ 11 ( 2 ) ( 0 , θ , y ) = e i ( θ , y ) ; m ^ 21 ( 2 ) ( 0 , θ , y ) = 0 for j = 1 , m ^ 12 ( 2 ) ( 0 , θ , y ) = 0 ; m ^ 22 ( 2 ) ( 0 , θ , y ) = e i ( θ , y ) for j = 2 .
Therefore, finally
m ^ 11 ( 2 ) ( t , θ , y ) = e i ( θ , y ) e a ( θ ) t ; m ^ 21 ( 2 ) ( t , θ , y ) = e d ( θ ) t 0 t e d ( θ ) s c e i ( θ , y ) e a ( θ ) s + f 2 ( 1 ) ( t , θ , y ) d s ; m ^ 12 ( 2 ) ( t , θ , y ) = 0 ; m ^ 22 ( 2 ) ( t , θ , y ) = e d ( θ ) t 0 t e d ( θ ) s f 2 ( 2 ) ( t , θ , y ) d s + e i ( θ , y ) .
Case b > 0 , c = 0 .
Similarly to the previous case, here the functions f 2 ( j ) ( t , θ , y ) , j = 1 , 2 , in Equations (59) are identically zero, i.e.,
f 2 ( j ) ( t , θ , y ) 0 , for j = 1 , 2 .
Then, Equations (59) also take the “triangle” form
m ^ 1 j ( 2 ) ( t , θ , y ) t = a ( θ ) m ^ 1 j ( 2 ) ( t , θ , y ) + b m ^ 2 j ( 2 ) ( t , θ , y ) + f 1 ( j ) ( t , θ , y ) ; m ^ 2 j ( 2 ) ( t , θ , y ) t = d ( θ ) m ^ 2 j ( 2 ) ( t , θ , y ) .
The solution of the second equation is equal to
m ^ 2 j ( 2 ) ( t , θ , y ) = m ^ 2 j ( 2 ) ( 0 , θ , y ) e d ( θ ) t = δ 2 ( j ) e i ( θ , y ) e d ( θ ) t .
where again the first equality follows from Equation (63) whereas the second equality follows from Equation (61).
To solve the first equation we apply the formula (63) with x ( t ) = m ^ 1 j ( 2 ) ( t , θ , y ) , k = a ( θ ) and f ( t ) = b m ^ 2 j ( 2 ) ( t , θ , y ) + f 1 ( j ) ( t , θ , y ) . Then,
m ^ 1 j ( 2 ) ( t , θ , y ) = e a ( θ ) t m ^ 1 j ( 2 ) ( 0 , θ , y ) + 0 t b m ^ 2 j ( 2 ) ( s , θ , y ) + f 1 ( j ) ( s , θ , y ) e a ( θ ) s d s .
where by Equation (61) we have
m ^ 11 ( 2 ) ( 0 , θ , y ) = e i ( θ , y ) ; m ^ 21 ( 2 ) ( 0 , θ , y ) = 0 for j = 1 , m ^ 12 ( 2 ) ( 0 , θ , y ) = 0 ; m ^ 22 ( 2 ) ( 0 , θ , y ) = e i ( θ , y ) for j = 2 .
Therefore,
m ^ 11 ( 2 ) ( t , θ , y ) = e a ( θ ) t 0 t e a ( θ ) s f 1 ( j ) ( t , θ , y ) d s + e i ( θ , y ) ; m ^ 21 ( 2 ) ( t , θ , y ) = 0 ; m ^ 12 ( 2 ) ( t , θ , y ) = e a ( θ ) t 0 t e a ( θ ) s b e i ( θ , y ) e d ( θ ) s + f 1 ( j ) ( t , θ , y ) d s ; m ^ 22 ( 2 ) ( t , θ , y ) = e i ( θ , y ) e d ( θ ) t .
Case b > 0 , c > 0 .
To address this case, we first recall the explicit form of the solution of the following linear differential equation:
$$\frac{d\mathbf{x}(t)}{dt} = A\, \mathbf{x}(t) + \mathbf{f}(t), \qquad (64)$$
where $A$ is a matrix (in our problem, a $2 \times 2$ matrix) with time-independent (constant) entries and $\mathbf{f}(t)$ is a column-vector function.
The solution of Equation (64) can be easily obtained by the method of variation of parameters, see, e.g., [17] or any other textbook on linear differential equations:
$$\mathbf{x}(t) = U(t)\, \mathbf{x}(0) + \int_0^{t} U(t - s)\, \mathbf{f}(s)\, ds, \qquad (65)$$
where the matrix function $U(t)$ is the so-called “fundamental solution” of Equation (64). It is known [17] that $U(t)$ can be expressed as $U(t) = \exp\{A t\}$. However, for us, the following representation of $U(t)$ will be more useful:
$$U(t) = \begin{pmatrix} u_{11}(t) & u_{12}(t) \\ u_{21}(t) & u_{22}(t) \end{pmatrix},$$
where the vector functions
$$u_1(t) = \begin{pmatrix} u_{11}(t) \\ u_{21}(t) \end{pmatrix}, \qquad u_2(t) = \begin{pmatrix} u_{12}(t) \\ u_{22}(t) \end{pmatrix}$$
are solutions of the homogeneous system
$$\frac{d\mathbf{x}(t)}{dt} = A\, \mathbf{x}(t),$$
satisfying the initial conditions, respectively,
$$u_1(0) = \begin{pmatrix} 1 \\ 0 \end{pmatrix}, \qquad u_2(0) = \begin{pmatrix} 0 \\ 1 \end{pmatrix}.$$
The components $u_{ij}(t)$ of the solutions $u_1(t)$ and $u_2(t)$ can be computed using the calculations from Remark 6 (case $b > 0$, $c > 0$). Carrying out the necessary computations, we obtain:
$$u_{11}(t) = \frac{1}{\lambda_1(\theta) - \lambda_2(\theta)} \Big( \big(a(\theta) - \lambda_2(\theta)\big) e^{\lambda_1(\theta) t} + \big(\lambda_1(\theta) - a(\theta)\big) e^{\lambda_2(\theta) t} \Big), \qquad u_{21}(t) = \frac{c}{\lambda_1(\theta) - \lambda_2(\theta)} \Big( e^{\lambda_1(\theta) t} - e^{\lambda_2(\theta) t} \Big), \qquad u_{12}(t) = \frac{b}{\lambda_1(\theta) - \lambda_2(\theta)} \Big( e^{\lambda_1(\theta) t} - e^{\lambda_2(\theta) t} \Big), \qquad u_{22}(t) = \frac{1}{\lambda_1(\theta) - \lambda_2(\theta)} \Big( \big(\lambda_1(\theta) - a(\theta)\big) e^{\lambda_1(\theta) t} - \big(\lambda_2(\theta) - a(\theta)\big) e^{\lambda_2(\theta) t} \Big),$$
where $\lambda_{1,2}(\theta)$ are specified by Equation (39).
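The components $u_{ij}(t)$ written above are exactly the entries of the matrix exponential $\exp\{A t\}$; the sketch below verifies this for an arbitrary matrix with $b, c > 0$ (the entries are illustrative, and $\theta$ is fixed so that $a$ and $d$ are plain numbers).

```python
import numpy as np
from scipy.linalg import expm

a, b, c, d = -0.2, 0.3, 0.4, -0.1            # illustrative entries with b, c > 0
A = np.array([[a, b], [c, d]])
D = np.sqrt((a - d) ** 2 + 4 * b * c)
lam1, lam2 = (a + d + D) / 2, (a + d - D) / 2

def U(t):                                    # closed-form fundamental solution
    e1, e2 = np.exp(lam1 * t), np.exp(lam2 * t)
    return np.array([[(a - lam2) * e1 + (lam1 - a) * e2, b * (e1 - e2)],
                     [c * (e1 - e2), (lam1 - a) * e1 + (a - lam2) * e2]]) / (lam1 - lam2)

print(U(2.0))
print(expm(A * 2.0))                         # entries should coincide
```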
Therefore, from Equations (61) and (65) we obtain the following solutions for Equations (59) and (60):
m ^ 11 ( 2 ) ( t , θ , y ) = 1 λ 1 ( θ ) λ 2 ( θ ) 0 t ( a ( θ ) λ 2 ( θ ) ) f 1 ( 1 ) ( s , θ , y ) + b f 2 ( 1 ) ( s , θ , y ) e λ 1 ( θ ) ( t s ) d s + 1 λ 1 ( θ ) λ 2 ( θ ) 0 t ( λ 1 ( θ ) a ( θ ) ) f 1 ( 1 ) ( s , θ , y ) b f 2 ( 1 ) ( s , θ , y ) e λ 2 ( θ ) ( t s ) d s + 1 λ 1 ( θ ) λ 2 ( θ ) ( a ( θ ) λ 2 ( θ ) ) e λ 1 ( θ ) t + ( λ 1 ( θ ) a ( θ ) ) e λ 2 ( θ ) t e i ( θ , y ) ,
m ^ 21 ( 2 ) ( t , θ , y ) = 1 λ 1 ( θ ) λ 2 ( θ ) 0 t c f 1 ( 1 ) ( s , θ , y ) + ( λ 1 ( θ ) a ( θ ) ) f 2 ( 1 ) ( s , θ , y ) e λ 1 ( θ ) ( t s ) d s + 1 λ 1 ( θ ) λ 2 ( θ ) 0 t c f 1 ( 1 ) ( s , θ , y ) + ( λ 2 ( θ ) a ( θ ) ) f 2 ( 1 ) ( s , θ , y ) e λ 2 ( θ ) ( t s ) d s + c λ 1 ( θ ) λ 2 ( θ ) e λ 1 ( θ ) t e λ 2 ( θ ) t e i ( θ , y ) ,
m ^ 12 ( 2 ) ( t , θ , y ) = 1 λ 1 ( θ ) λ 2 ( θ ) 0 t ( a ( θ ) λ 2 ( θ ) ) f 1 ( 2 ) ( s , θ , y ) + b f 2 ( 2 ) ( s , θ , y ) e λ 1 ( θ ) ( t s ) d s + 1 λ 1 ( θ ) λ 2 ( θ ) 0 t ( λ 1 ( θ ) a ( θ ) ) f 1 ( 2 ) ( s , θ , y ) b f 2 ( 1 ) ( s , θ , y ) e λ 2 ( θ ) ( t s ) d s + b λ 1 ( θ ) λ 2 ( θ ) e λ 1 ( θ ) t e λ 2 ( θ ) t e i ( θ , y ) ,
m ^ 22 ( 2 ) ( t , θ , y ) = 1 λ 1 ( θ ) λ 2 ( θ ) 0 t c f 1 ( 2 ) ( s , θ , y ) + ( λ 2 ( θ ) a ( θ ) ) f 2 ( 2 ) ( s , θ , y ) e λ 1 ( θ ) ( t s ) d s + 1 λ 1 ( θ ) λ 2 ( θ ) 0 t c f 1 ( 1 ) ( s , θ , y ) + ( λ 2 ( θ ) a ( θ ) ) f 2 ( 1 ) ( s , θ , y ) e λ 2 ( θ ) ( t s ) d s + 1 λ 1 ( θ ) λ 2 ( θ ) ( λ 1 ( θ ) a ( θ ) ) e λ 1 ( θ ) t + ( λ 2 ( θ ) a ( θ ) ) e λ 2 ( θ ) t e i ( θ , y ) ,
where the functions f i ( j ) ( s , θ , y ) are defined by Equation (62).

5. Clustering for BRWs with Two Types of Particles with a Critical Reproduction Law

In this section, we consider BRWs with two types of particles satisfying the condition that the particle reproduction law at each lattice point is described by an irreducible critical two-type branching process and that the underlying random walks have finite variances of the jumps. We show that for particles of both types with the underlying recurrent random walks on Z d , a phenomenon of clustering of particles can be observed over long times, implying that the majority of particles are concentrated in some particular areas. We generalize the study started in [19] for BRW with one type of particles.

5.1. Degeneration Probability

In this section, based on the results for two-type critical branching processes we show that the probability of degeneracy of the subpopulation tends to 1 for the underlying recurrent random walk. We also show that, at the same time, subpopulations that are not degenerate exhibit linear growth in t at infinity.
Let us introduce some notation. Denote by $D = (d_{ij})$ the matrix with the elements
$$d_{ij} := \frac{\partial \big( F_i(z_1, z_2) + \delta_i(1)\, \beta_1(1, 0)\, z_1 + \delta_i(2)\, \beta_2(0, 1)\, z_2 \big)}{\partial z_j}\Big|_{z = (1, 1)}, \qquad i, j = 1, 2,$$
where $F_i(z_1, z_2)$ are the generating functions defined in Equation (2). We also define the densities of the second factorial moments of $F_i(z_1, z_2)$ (cf. Equation (4) in [8] (Ch. 4, § 7)) as
$$b_{jk}^{(i)} := \frac{\partial^2 F_i(z_1, z_2)}{\partial z_j\, \partial z_k}\Big|_{z = (1, 1)}, \qquad i, j, k = 1, 2,$$
and assume that condition (14) holds, so that d i j and b j k ( i ) are finite for all i , j , k = 1 , 2 .
Recall the following definition from [8] (Ch. 4, § 5, Def. 2):
Definition 1.
A matrix $C = (c_{ij})$, $i, j = 1, \dots, n$, is called reducible if there exist two subsets $S_1, S_2 \subset \{1, \dots, n\}$ with $S_1 \cap S_2 = \varnothing$ such that $c_{ij} = 0$ for all $i \in S_1$, $j \in S_2$. Otherwise, the matrix $C$ is called irreducible.
Definition 2.
A branching process is called irreducible [8](Ch. 4, § 6, Th. 2) if the matrix D is irreducible.
Now, note that due to (7), $m_{ij}^{(1)}(t, x, y) \equiv m_{ij}^{(1)}(t, x - y, 0)$, and then
$$\sum_{y \in \mathbb{Z}^d} m_{ij}^{(1)}(t, x, y) = \sum_{y \in \mathbb{Z}^d} m_{ij}^{(1)}(t, x - y, 0) = \sum_{z \in \mathbb{Z}^d} m_{ij}^{(1)}(t, z, 0), \qquad i, j = 1, 2, \qquad (66)$$
where the sum on the right-hand side is finite due to Remark 5. Then, the matrix $D(t, x) := (d_{ij}(t, x))$ with elements
$$d_{ij}(t, x) := \mathsf{E} \sum_{y \in \mathbb{Z}^d} n_{ij}(t, x, y) = \sum_{y \in \mathbb{Z}^d} m_{ij}^{(1)}(t, x, y), \qquad i, j = 1, 2,$$
is well-defined, i.e., its elements $d_{ij}(t, x)$ are finite. Moreover, the relations (66) show that the quantity $d_{ij}(t, x)$ does not depend on the spatial coordinate $x$, i.e.,
$$d_{ij}(t, x) = d_{ij}(t) \quad \text{for all } x \in \mathbb{Z}^d.$$
Then, according to [8](Ch. 4, § 7, Th. 5), we have
\[ d_{ij}(t) = u_i v_j\, e^{rt} + o\bigl(e^{r_1 t}\bigr) \quad \text{as } t\to\infty, \qquad i,j=1,2, \]
where r is the Perron root (see [8](Ch. 4, § 5, Def. 6)) of the matrix D and r 1 is some quantity satisfying r 1 < r . We denote by
u = ( u 1 , u 2 ) , v = ( v 1 , v 2 )
the left and right eigenvectors, respectively, corresponding to the eigenvalue r of D.
Definition 3.
An irreducible branching process is called critical [8](Ch. 4, § 7, Def. 2) if r = 0 and
\[ \sum_{i=1}^{2}\sum_{j=1}^{2}\sum_{k=1}^{2} v_i\, b^{(i)}_{jk}\, u_j u_k > 0. \]
Let
n i ( t , x ) = j = 1 2 y Z d n i j ( t , x , y )
be the number of particles in a subpopulation at time t generated by a particle of the i-th type provided that at the initial moment of time the particle was at the point x.
Remark 11.
Evaluate the quantity $n_i(t,x)$, $i=1,2$, at the time moment $t+dt$. Let $n_{ij}(t,x)$, $j=1,2$, be the number of offspring of type $j$ generated by a single particle of type $i$ which at the time moment $t=0$ was located at the point $x\in\mathbb{Z}^d$, so that $n_i(t,x) = n_{i1}(t,x)+n_{i2}(t,x)$. Let $G_i(t,x;z) = E\, z^{n_i(t,x)}$. Then, by using the Kolmogorov forward equation, we obtain the following relations:
G i ( t + d t , x ; z ) = E z n i ( t + d t , x ) = E z n i ( t + d t , x ) [ k + l 2 β 1 ( k , l ) n i 1 ( t , x ) + β 2 ( k , l ) n i 2 ( t , x ) z k + l d t + μ 1 n i 1 ( t , x ) + μ 2 n i 2 ( t , x ) z 1 d t + ϰ 1 n i 1 ( t , x ) + ϰ 2 n i 2 ( t , x ) d t + ( 1 ϰ 1 n i 1 ( t , x ) d t ϰ 2 n i 2 ( t , x ) d t k + l 2 ( β 1 ( k , l ) n i 1 ( t , x ) + β 2 ( k , l ) n i 2 ( t , x ) ) z k + l d t ) μ 1 n i 1 ( t , x ) d t μ 2 n i 2 ( t , x ) d t + o ( d t ) ] = E z n i ( t + d t , x ) [ k + l 2 β 1 ( k , l ) n i 1 ( t , x ) + β 2 ( k , l ) n i 2 ( t , x ) z k + l d t + μ 1 n i 1 ( t , x ) + μ 2 n i 2 ( t , x ) z 1 d t + ( 1 k + l 2 ( β 1 ( k , l ) n i 1 ( t , x ) + β 2 ( k , l ) n i 2 ( t , x ) ) z k + l d t ) μ 1 n i 1 ( t , x ) d t μ 2 n i 2 ( t , x ) d t + o ( d t ) ]
From these relations, it can be seen that the behavior of the process n i ( t , x ) depends only on its “branching component” and the evolution of the process coincides with the evolution of the branching process with continuous time treated in [8]. For this reason, we apply the results of [8] in the following.
Remark 12.
Note that from Remark 2, we have for all k Z + :
P ( n i ( t , x ) = k ) = P j = 1 2 y Z d n i j ( t , x , y ) = k = P j = 1 2 y Z d n i j ( t , 0 , y x ) = k = P ( n i ( t , 0 ) = k ) .
Recall that the branching process under consideration is assumed to be critical and irreducible. In this case, [8](Ch. 6, § 3, Th. 4) implies that the probability of non-degeneration of a subpopulation has the following asymptotic behavior for all x Z d as t :
\[ P\bigl(n_i(t,x)>0\bigr) = P\bigl(n_i(t,0)>0\bigr) = \frac{c_i}{t} + o\Bigl(\frac{1}{t}\Bigr) \to 0, \qquad P\bigl(n_i(t,x)=0\bigr) = P\bigl(n_i(t,0)=0\bigr) = 1 - \frac{c_i}{t} + o\Bigl(\frac{1}{t}\Bigr) \to 1, \]
where c i is a constant. Thus, the probability of degeneration P n i ( t , x ) = 0 of the subpopulation tends to 1 for all x Z d as t .
Now, we will estimate the conditional mathematical expectation
E y Z d n i j ( t , x , y ) | n i ( t , x ) > 0
which is the main object of the study in Section 5.1. By the definition of conditional expectation we have for all x Z d
E y Z d n i j ( t , x , y ) | n i ( t , x ) > 0 = E y Z d n i j ( t , x , y ) I { n i ( t , x ) > 0 } P ( n i ( t , x ) > 0 ) ,
where I { A } is the indicator of the set A. Note that from Equation (69) it follows that
n i ( t , x ) = 0 y Z d n i j ( t , x , y ) = 0 for j = 1 , 2 .
Then, using the formula of total probability, we have
E y Z d n i j ( t , x , y ) = E y Z d n i j ( t , x , y ) I { n i ( t , x ) > 0 } + I { n i ( t , x ) = 0 } = E y Z d n i j ( t , x , y ) I { n i ( t , x ) > 0 } + E y Z d n i j ( t , x , y ) I { n i ( t , x ) = 0 } = E y Z d n i j ( t , x , y ) I { n i ( t , x ) > 0 } .
Thus, from Equation (67) we obtain
d i j ( t ) = d i j ( t , x ) = E y Z d n i j ( t , x , y ) = E y Z d n i j ( t , x , y ) | n i ( t , x ) > 0 P n i ( t , x ) > 0 .
At the same time, substituting $r=0$ in Equation (68), we obtain $d_{ij}(t) = u_i v_j + o(1)$, whence, denoting $C_{ij} := u_i v_j / c_i = \mathrm{const}$ and using Equation (70), we get
\[ E\Bigl[\sum_{y\in\mathbb{Z}^d} n_{ij}(t,x,y)\,\Big|\, n_i(t,x)>0\Bigr] = \frac{u_i v_j + o(1)}{c_i/t + o(1/t)} = C_{ij}\, t + o(t) \quad \text{as } t\to\infty. \]
By virtue of Equation (70), the probability of degeneration of the subpopulation $P(n_i(t,x)=0)$ tends to 1 as $t\to\infty$. At the same time, due to Equation (71), those subpopulations that are not degenerate grow linearly in $t$ at infinity.
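These two effects can be illustrated numerically. The sketch below is a minimal Monte Carlo illustration, assuming a simple critical irreducible two-type reproduction law chosen only for this example (it is not the reproduction law of Section 2); by Remark 11, the total subpopulation size does not depend on the walk, so the spatial component is omitted.

```python
import random

# Illustrative critical, irreducible two-type reproduction law: a particle of either type
# dies with probability 1/2, splits into two particles of its own type with probability 1/4,
# and produces one particle of each type with probability 1/4; lifetimes are Exp(1).
# The mean offspring matrix is [[3/4, 1/4], [1/4, 3/4]], whose Perron root equals 1 (r = 0).
def offspring(own, other):
    u = random.random()
    if u < 0.5:
        return []              # this line degenerates
    if u < 0.75:
        return [own, own]      # two particles of the parent's type
    return [own, other]        # one particle of each type

def subpopulation_size(t_max, start_type=1):
    """Size of the subpopulation of one initial particle at time t_max (0 if degenerate)."""
    particles = [start_type]
    t = 0.0
    while particles:
        t += random.expovariate(len(particles))           # time of the next branching event
        if t > t_max:
            break
        parent = particles.pop(random.randrange(len(particles)))
        particles.extend(offspring(parent, 2 if parent == 1 else 1))
    return len(particles)

random.seed(1)
for t_max in (10, 20, 40):
    sizes = [subpopulation_size(t_max) for _ in range(4000)]
    alive = [s for s in sizes if s > 0]
    p_surv = len(alive) / len(sizes)
    print(f"t = {t_max:2d}: P(non-degeneration) ~ {p_surv:.3f} (~ c/t with c ~ {p_surv * t_max:.2f}), "
          f"E[size | non-degeneration] ~ {sum(alive) / max(len(alive), 1):.1f}")
```

The estimated survival probability decays roughly like $c/t$, while the conditional mean size grows roughly linearly in $t$, in agreement with Equations (70) and (71).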

5.2. Clustering

In this section, we study the effect of clustering at each point for an irreducible critical branching process under the condition that the tails of the random walks are superexponentially light, i.e., for each $\lambda\in\mathbb{R}^d$ and $i=1,2$ the following condition holds:
\[ \sum_{v\in\mathbb{Z}^d} e^{(\lambda,v)}\, a_i(v) < \infty. \]
By p i ( t , x , y ) , we denote the transition probability of the random walk on Z d defined by L i , i = 1 , 2 , see Equation (3). From [20] (Ch. 3, § 2) it follows that p i ( t , x , y ) is the solution of the Cauchy problem
\[ \frac{\partial p_i(t,x,y)}{\partial t} = (L_i\, p_i(t,\cdot,y))(x), \qquad p_i(0,x,y) = \delta_x(y). \]
Note that here, as was shown in Remark 9, p i ( t , x , y ) = p i ( t , x y , 0 ) = p i ( t , 0 , y x ) , which follows from the property of spatial homogeneity of the process under consideration.
Denote $y-x=s$; then, from [13] (Eq. (4.7)), we have for $s=O(\sqrt{t})$ the following equality:
\[ p_i(t,0,s) = \frac{e^{-(B_i^{-1}s,\,s)/(2t)}}{(2\pi t)^{d/2}\sqrt{\det B_i}} + o\bigl(t^{-d/2}\bigr), \]
where $B_i = \bigl(b_i^{(kj)}\bigr)_{k,j}$, $i=1,2$, is the matrix with the elements
\[ b_i^{(kj)} = \sum_{v} a_i(v)\, v_k v_j, \qquad v=(v_1,\ldots,v_d),\quad k,j=1,\ldots,d. \]
For $d=1$, the matrix $B_i$ is one-dimensional: $B_i = \bigl(b_i^{(11)}\bigr)$. We denote $b_i := b_i^{(11)}$; then, Equation (72) takes the form
\[ p_i^{(1)}(t,0,s) = \frac{e^{-s^2/(2 b_i t)}}{\sqrt{2\pi b_i t}} + o\bigl(1/\sqrt{t}\bigr). \]
Consider the probability that a particle located at the point $0\in\mathbb{Z}^d$ will move no further than a distance $C\sqrt{t}$, where $C>0$ is some constant. Then,
\[ \sum_{|s|<C\sqrt{t}} p_i^{(1)}(t,0,s) = p_i^{(1)}(t,0,0) + 2\sum_{s\in\mathbb{N},\; s<C\sqrt{t}} p_i^{(1)}(t,0,s) > p_i^{(1)}(t,0,0) + 2\int_{1}^{C\sqrt{t}} \frac{e^{-\tau^2/(2 b_i t)}}{\sqrt{2\pi b_i t}}\,d\tau = 2\int_{0}^{C\sqrt{t}} \frac{e^{-\tau^2/(2 b_i t)}}{\sqrt{2\pi b_i t}}\,d\tau + o(1). \]
Note that the integrand in the last equality is the function
\[ f(\tau) = \frac{e^{-\tau^2/(2 b_i t)}}{\sqrt{2\pi b_i t}}, \]
which is the density of a normal random variable with mean $0$ and variance $b_i t$. Therefore, by choosing an appropriate constant $C>0$, we can make the quantity
\[ 2\int_{0}^{C\sqrt{t}} \frac{e^{-\tau^2/(2 b_i t)}}{\sqrt{2\pi b_i t}}\,d\tau \]
arbitrarily close to 1. Hence, for every $\varepsilon>0$ there exists $C>0$ such that
\[ \sum_{|y-x|<C\sqrt{t}} p_i^{(1)}(t,x,y) > 1-\varepsilon. \]
Thus, as $t\to\infty$, a particle will move a distance of no more than $C\sqrt{t}$ with probability arbitrarily close to 1.
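The choice of $C$ can be made explicit with the normal approximation above. The following sketch (with illustrative values of $b_i$ and $\varepsilon$ that are not taken from the text) finds, on a grid, the smallest constant $C$ for which the displacement stays below $C\sqrt{t}$ with probability at least $1-\varepsilon$; by Equation (73), this probability is asymptotically independent of $t$.

```python
import math

# For d = 1 the displacement at time t is approximately N(0, b_i * t), so the probability
# of staying within C*sqrt(t) of the start is ~ erf(C / sqrt(2 * b_i)), independent of t.
# b_i = 1.0 and eps = 0.01 are illustrative values, not taken from the paper.
b_i = 1.0
eps = 0.01

def mass_within(C, b):
    """Normal-approximation probability that |displacement| < C * sqrt(t)."""
    return math.erf(C / math.sqrt(2.0 * b))

C = 0.0
while mass_within(C, b_i) <= 1.0 - eps:
    C += 0.01
print(f"C = {C:.2f} keeps the particle within C*sqrt(t) with probability > {1 - eps}")
```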
Let us turn to the case $d=2$. Then, Equation (72) takes the form
\[ p_i^{(2)}(t,0,s) = \frac{e^{-(B_i^{-1}s,\,s)/(2t)}}{2\pi t\sqrt{\det B_i}} + o(1/t). \]
As in the previous case, consider the probability that a particle located at the point $0\in\mathbb{Z}^d$ will move no further than a distance $C\sqrt{t}$. Consequently,
\[ \sum_{|s|<C\sqrt{t}} p_i^{(2)}(t,0,s) = p_i^{(2)}(t,0,0) + 4\sum_{\substack{s_1\in\mathbb{N},\, s_2\in\mathbb{Z}_+ \\ s_1^2+s_2^2<C^2 t}} p_i^{(2)}(t,0,s) > p_i^{(2)}(t,0,0) + 4\int_{\substack{\tau_1\ge 1,\, \tau_2\ge 0 \\ \tau_1^2+\tau_2^2<C^2 t}} \frac{e^{-(B_i^{-1}\tau,\,\tau)/(2t)}}{2\pi t\sqrt{\det B_i}}\,d\tau > p_i^{(2)}(t,0,0) + 4\int_{\substack{\tau_1\ge 0,\, \tau_2\ge 0 \\ \tau_1^2+\tau_2^2<C^2 t}} \frac{e^{-(B_i^{-1}\tau,\,\tau)/(2t)}}{2\pi t\sqrt{\det B_i}}\,d\tau - 4\int_{0\le\tau_1\le 1,\; 0\le\tau_2\le C\sqrt{t}} \frac{e^{-(B_i^{-1}\tau,\,\tau)/(2t)}}{2\pi t\sqrt{\det B_i}}\,d\tau = \int_{\tau_1^2+\tau_2^2<C^2 t} \frac{e^{-(B_i^{-1}\tau,\,\tau)/(2t)}}{2\pi t\sqrt{\det B_i}}\,d\tau + o(1). \]
Note that the integrand in the last equality is the function
\[ f(\tau_1,\tau_2) = \frac{e^{-(B_i^{-1}\tau,\,\tau)/(2t)}}{2\pi t\sqrt{\det B_i}}, \]
which is the density of a two-dimensional normal random vector with mean $(0,0)$ and covariance matrix $B_i t$.
Therefore, choosing an appropriate constant C we can get
\[ \int_{\tau_1^2+\tau_2^2<C^2 t} \frac{e^{-(B_i^{-1}\tau,\,\tau)/(2t)}}{2\pi t\sqrt{\det B_i}}\,d\tau \]
arbitrarily close to 1. Hence, due to Equation (74) for every ε > 0 , there exists C > 0 , such that
\[ \sum_{|y-x|<C\sqrt{t}} p_i^{(2)}(t,x,y) > 1-\varepsilon. \]
Thus, as $t\to\infty$, a particle will move a distance no greater than $C\sqrt{t}$ with probability arbitrarily close to 1. This result for the lattice dimension $d=2$ is similar to that obtained in Equation (73) for the lattice dimension $d=1$.
Let us now consider the situation where there is one particle of type i at each point at the initial time. We denote the set of odd positive integers by N 1 and the set of even positive integers by N 2 . Let us consider a given particle at time t and all its progenitors up to the initial time.
We consider a particle at time $t$ and a sequence (evolutionary lineage) $K$ consisting of all its $m$ progenitors (from the initial particle to the immediate parent) and the particle itself, $K=(k_1,\ldots,k_m,k_{m+1})$, $m\ge 0$. If $m>0$ (i.e., the particle is not included in the set of initial particles on the lattice), then we select from the sequence of indices $[2,\ldots,m+1]$ those $s$ for which $type(k_s)\ne type(k_{s-1})$, where $type(k)$ denotes the type of the particle $k$. We denote the sequence of selected indices by $S=(s_1,\ldots,s_n)$, $n\le m+1$. If the sequence $S$ turns out to be empty (i.e., no type changes were observed in the evolutionary lineage considered), then we add to it the index $s_1:=m+1$, so that $n=1$. We denote by $h(k)$ the lifetime of the particle $k$ and construct the sequence $\tau=(\tau_1,\ldots,\tau_n)$, where $\tau_1=\sum_{i=1}^{S[1]} h(k_i)$ and $\tau_j=\sum_{i=1}^{S[j]} h(k_i)-\tau_{j-1}$, $j=2,\ldots,n$. Note that $\sum_{i=1}^{n}\tau_i=t$. Assuming that the evolutionary lineage started with a particle of type 1, we obtain that in the time intervals $(0,\tau_1), (\tau_2,\tau_3),\ldots$ the particles of this evolutionary lineage walk on the lattice under the action of $L_1$, and on the time intervals $(\tau_1,\tau_2), (\tau_3,\tau_4),\ldots$ under the action of $L_2$.
We denote by p ( τ , x , y ) the probability for a particle to move from a point x to y on Z d in time τ . Due to the Kolmogorov–Chapman equation, for n 2 we obtain
\[ p(\tau,x,y) = \sum_{x_i\in\mathbb{Z}^d,\; 1\le i\le n-1} p_1(\tau_1,x,x_1)\prod_{i=2}^{n-1} p_{s(i)}(\tau_i,x_{i-1},x_i)\; p_{s(n)}(\tau_n,x_{n-1},y), \]
where s ( i ) = 1 for i N 1 and s ( i ) = 2 for i N 2 . This representation will be needed in the following lemma.
Lemma 4.
Let t 1 : = i N 1 τ i be the total time spent by a particle in the first state and t 2 : = i N 2 τ i be the total time spent by the same particle in the second state. Then,
\[ p(\tau,x,y) = \sum_{x'\in\mathbb{Z}^d} p_1(t_1,x,x')\, p_2(t_2,x',y). \]
Proof. 
Let us show first that
\[ p(\tau,x,y) = p\bigl((\tau_1,\ldots,\tau_{n-2}+\tau_n,\tau_{n-1}),x,y\bigr). \]
For the proof, due to Equation (75), it is enough to consider the following sequence of relations:
p ( τ , x , y ) = x i Z d 1 i n 1 ( p 1 ( τ 1 , x , x 1 ) i = 2 n 3 p s ( i ) ( τ i , x i 1 , x i ) p s ( n 2 ) ( τ n 2 , x n 3 , x n 2 ) × p s ( n 1 ) ( τ n 1 , x n 2 , x n 1 ) p s ( n ) ( τ n , x n 1 , y ) ) = x i Z d 1 i n 2 x Z d p 1 ( τ 1 , x , x 1 ) × i = 2 n 3 p s ( i ) ( τ i , x i 1 , x i ) p s ( n 2 ) ( τ n 2 , x n 3 , x n 2 ) p s ( n 1 ) ( τ n 1 , x , y ) p s ( n ) ( τ n , x n 2 , x ) = x i Z d 1 i n 3 x Z d p 1 ( τ 1 , x , x 1 ) i = 2 n 3 p s ( i ) ( τ i , x i 1 , x i ) p s ( n 1 ) ( τ n 1 , x , y ) × x n 2 p s ( n 2 ) ( τ n 2 , x n 3 , x n 2 ) p s ( n ) ( τ n , x n 2 , x ) = x i Z d 1 i n 3 x Z d p 1 ( τ 1 , x , x 1 ) × i = 2 n 3 p s ( i ) ( τ i , x i 1 , x i ) p s ( n 2 ) ( τ n 2 + τ n , x n 3 , x ) p s ( n 1 ) ( τ n 1 , x , y ) = x i Z d 1 i n 2 p 1 ( τ 1 , x , x 1 ) i = 2 n 3 p s ( i ) ( τ i , x i 1 , x i ) p s ( n 2 ) ( τ n 2 + τ n , x n 3 , x n 2 ) × p s ( n 1 ) ( τ n 1 , x n 2 , y ) = p ( ( τ 1 , , τ n 2 + τ n , τ n 1 ) , x , y ) .
Consecutively, applying the Formula (77) n 2 times, we obtain
\[ p(\tau,x,y) = p\bigl((\tau_1,\ldots,\tau_{n-2}+\tau_n,\tau_{n-1}),x,y\bigr) = p\bigl((\tau_1,\ldots,\tau_{n-3}+\tau_{n-1},\tau_{n-2}+\tau_n),x,y\bigr) = \cdots = p\bigl((\tau_1+\tau_3+\cdots+\tau_{2[(n-1)/2]+1},\; \tau_2+\tau_4+\cdots+\tau_{2[n/2]}),x,y\bigr), \]
whence the assertion of the lemma follows. □
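Lemma 4 can also be checked numerically. The sketch below is a minimal illustration on the cycle Z_N (used instead of Z^d so that the generators are finite matrices), with jump rates mirroring the simulation parameters of Section 7; it compares an alternating composition of the two transition semigroups with the split form on the right-hand side of Equation (76). The two matrices agree up to machine precision because translation-invariant generators commute.

```python
import numpy as np
from scipy.linalg import expm

# Two symmetric, translation-invariant walks on the cycle Z_N; an alternating composition
# over durations tau_1, ..., tau_n depends only on the total times t_1 and t_2 (Lemma 4).
N = 30

def generator(jumps, kappa):
    """Generator of a symmetric random walk on Z_N with the given jump distribution."""
    Q = np.zeros((N, N))
    for z, a in jumps.items():
        for x in range(N):
            Q[x, (x + z) % N] += kappa * a
            Q[x, (x - z) % N] += kappa * a
    np.fill_diagonal(Q, Q.diagonal() - Q.sum(axis=1))   # rows of a generator sum to zero
    return Q

Q1 = generator({1: 0.5}, kappa=1.0)                      # nearest-neighbour walk
Q2 = generator({1: 1/6, 2: 1/6, 3: 1/6}, kappa=4.0)      # longer-range walk

tau = [0.4, 0.7, 0.3, 0.6, 0.5]                          # alternating type-1 / type-2 periods
P_alt = np.eye(N)
for i, s in enumerate(tau):
    P_alt = P_alt @ expm((Q1 if i % 2 == 0 else Q2) * s)

t1, t2 = sum(tau[0::2]), sum(tau[1::2])
P_split = expm(Q1 * t1) @ expm(Q2 * t2)                  # right-hand side of Equation (76)
print("max |difference| =", np.abs(P_alt - P_split).max())   # of order 1e-15
```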
Now, we will apply Equation (76) from Lemma 4 for understanding how far the particles can go from the initial position of their initial progenitor by some time t when t .
In the case $t\to\infty$ and $t_i\to\infty$, $i=1,2$, for every $\varepsilon>0$ there exists $C_i>0$ such that
\[ \sum_{|y-x|<C_i\sqrt{t}} p_i(t_i,x,y) \ge \sum_{|y-x|<C_i\sqrt{t_i}} p_i(t_i,x,y) > 1-\varepsilon. \]
In the case $t\to\infty$ and $t_i<C$, $i=1,2$, for every $\varepsilon>0$ we have:
\[ \sum_{|y-x|<\sqrt{t}} p_i(t_i,x,y) > 1-\varepsilon. \]
Thus, for $t\to\infty$, $\varepsilon>0$, and $\tau$ such that $\sum_i \tau_i = t$,
\[ \sum_{|y-x|<(C_1+C_2+1)\sqrt{t}} p(\tau,x,y) = \sum_{|y-x|<(C_1+C_2+1)\sqrt{t}}\; \sum_{x'\in\mathbb{Z}^d} p_1(t_1,x,x')\, p_2(t_2,x',y) > (1-\varepsilon)^2. \]
For $d=1$, the distance between the starting points of subpopulations that have not degenerated by the time $t$ has a geometric distribution with mean $t/c_i + o(t)$ (see Equation (70)), and the non-degenerate subpopulations contain particles at a distance of order at most $\sqrt{t}$ from the initial particle with probability arbitrarily close to 1, see Equation (73).
Thus, particle clusters of length of order $\sqrt{t}$ are separated by empty intervals of length of order $t$.
Let us turn to the case $d=2$. Choose two functions, $\nu(t)$ and $f(t)$, such that $\nu(t)\to\infty$ and $\nu(t)/t\to 0$ as $t\to\infty$, and $f(t)=O\bigl(\tfrac{1}{t\,\nu(t)}\bigr)$. Now, consider a square of the lattice with side $\sqrt{t\,\nu(t)\,e^{\nu(t)}/c_i}$ and divide it into cells with side $\sqrt{t\,\nu(t)/c_i}$; then, the number of cells is $e^{\nu(t)}$. We call a cell degenerate at time $t$ if it does not contain the starting points of subpopulations that have not degenerated by time $t$. Then, the probability that, as $t\to\infty$, all subpopulations of a cell degenerate is
\[ P_{deg}(t) = \Bigl(1-\frac{c_i}{t}+f(t)\Bigr)^{t\nu(t)/c_i} = e^{-\nu(t)+O(1)} \ge C\, e^{-\nu(t)} \]
with some constant $C>0$. The probability of the existence of a cell in which all subpopulations of the initial particles have degenerated is
\[ 1-\bigl(1-P_{deg}(t)\bigr)^{e^{\nu(t)}} \ge 1-e^{-C}. \]
Non-degenerate subpopulations contain particles at a distance of order at most $\sqrt{t} \ll \sqrt{t\,\nu(t)/c_i}$ from the initial particle.
Therefore, by the time $t$, we obtain particle-free circles with a radius of order $\sqrt{t\,\nu(t)/c_i}$ at distances of order $\sqrt{t\,\nu(t)\,e^{\nu(t)}/c_i}$.
Thus, we have proved that both in the case of dimension d = 1 and d = 2 , the effect of clustering of particle subpopulations takes place.
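As a quick sanity check of the two estimates used in the $d=2$ argument, the following sketch (with illustrative values $c_i=1$ and a fixed value of $\nu$, not tied to a particular function $\nu(t)$) evaluates the degeneration probability of one cell and the probability that at least one of the $e^{\nu}$ cells is fully degenerate.

```python
import math

# Illustrative values: c_i = 1, nu = 5, and a large time t.
c_i, t, nu = 1.0, 1e8, 5.0
points_per_cell = t * nu / c_i                      # lattice points in a cell of side sqrt(t*nu/c_i)
P_deg = (1.0 - c_i / t) ** points_per_cell          # all subpopulations of one cell degenerate
n_cells = math.exp(nu)
P_some_cell_empty = 1.0 - (1.0 - P_deg) ** n_cells
print(f"P_deg ~ {P_deg:.4f} (e^-nu = {math.exp(-nu):.4f}), "
      f"P(some cell is fully degenerate) ~ {P_some_cell_empty:.4f}")
```

The first value is close to $e^{-\nu}$ and the second exceeds $1-e^{-1}$, in line with the bounds above.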

6. Example

One of the assumptions of the model presented in Section 2 was that particles cannot change their type over time (see Remark 1). Here, we will consider an example where particles of the first type can turn into particles of the second type. This example can describe the spread of a virus.
In Section 6.1, we will describe a new model of BRW with two types of particles, where the particles can change their types. We will use the designations from Section 2. In Section 6.2, we study the first moments for the number of particles of type i = 1 , 2 at each lattice point. In Section 6.3, we obtain the solutions for the second moment for the number of particles of the first type at each lattice point in the more general case. In Section 6.4, we study the effect of intermittency in the simplest case for the number of particles of the first type. In Section 6.5, we obtain the differential equation for the second moment for the number of particles of the second type at each lattice point and find its asymptotic behavior as t .

6.1. Description of the Model

Consider a new model of BRW with two types of particles. Here, we will study the behavior of the processes $N_i(t,x)$, $i=1,2$, defined in (1). We call the particles of the first type infected and the particles of the second type particles with immunity. Let us denote by $r$ the intensity with which an infected particle builds up immunity during a small time $dt$. This means that the particle can change its type with probability $r\,dt+o(dt)$. Moreover, we assume that there was only one infected particle on the lattice at time $t=0$. Without loss of generality, we can assume that this initial particle was at the origin. Then, $N_1(0,x)=\delta_0(x)$ and $N_2(0,x)\equiv 0$ for all $x\in\mathbb{Z}^d$. Let $b_n$, $n\ge 2$, be the intensity of infecting $n-1$ new particles. Here, we assume that there are enough healthy particles at each point of the lattice to be infected. We are also interested in studying the moments of the numbers of particles of both types. In the previous notation, we set $b_n=\beta_1(n,0)$ and $\beta_2(k,l)\equiv 0$ for all $k,l$ with $k+l\ge 2$.
In what follows, due to the fact that particles can change their types, we use the forward Kolmogorov equations approach to obtain the differential equations for the moments of N i ( t , x ) , i = 1 , 2 . The derivation of the forward Kolmogorov equations is based on the following representation:
N 1 ( t + d t , x ) = N 1 ( t , x ) + ξ ( d t , x ) , N 2 ( t + d t , x ) = N 2 ( t , x ) + ψ ( d t , x ) ,
where ξ ( d t , x ) and ψ ( d t , x ) are discrete random variables with the following distributions:
\[ \xi(dt,x) = \begin{cases} n-1 & \text{with probability } b_n N_1(t,x)\,dt+o(dt),\ n\ge 3,\\ 1 & \text{with probability } b_2 N_1(t,x)\,dt+\varkappa_1\sum_{z\ne 0} a_1(z)N_1(t,x+z)\,dt+o(dt),\\ -1 & \text{with probability } \mu_1 N_1(t,x)\,dt+\varkappa_1 N_1(t,x)\,dt+r N_1(t,x)\,dt+o(dt),\\ 0 & \text{with probability } 1-\sum_{n\ge 3} b_n N_1(t,x)\,dt-(b_2+\mu_1+\varkappa_1)N_1(t,x)\,dt-r N_1(t,x)\,dt-\varkappa_1\sum_{z\ne 0} a_1(z)N_1(t,x+z)\,dt+o(dt); \end{cases} \]
\[ \psi(dt,x) = \begin{cases} 1 & \text{with probability } \varkappa_2\sum_{z\ne 0} a_2(z)N_2(t,x+z)\,dt+r N_1(t,x)\,dt+o(dt),\\ -1 & \text{with probability } \mu_2 N_2(t,x)\,dt+\varkappa_2 N_2(t,x)\,dt+o(dt),\\ 0 & \text{with probability } 1-(\mu_2+\varkappa_2)N_2(t,x)\,dt-r N_1(t,x)\,dt-\varkappa_2\sum_{z\ne 0} a_2(z)N_2(t,x+z)\,dt+o(dt). \end{cases} \]
We are going to study the first two moments for the random variables N i ( t , x ) , i = 1 , 2 . In the next section, we pay attention to the first moments.

6.2. The First Moments

In this section, we consider the first moments for N i ( t , x ) , i = 1 , 2 . We obtain the differential equations for them and find their explicit solutions in terms of the Fourier transform (23). We also obtain their asymptotic behavior in certain cases. Define the first moments R i ( t , x ) : = E N i ( t , x ) . Note that R i ( 0 , x ) = δ 1 ( i ) δ 0 ( x ) . Let F t be the sigma-algebra of events up to and including t. Note that ξ ( d t , x ) (and ψ ( d t , y ) ) and F t are independent.
Derive the differential equations for these functions:
\[ R_1(t+dt,x) = E\,N_1(t+dt,x) = E[N_1(t,x)+\xi(dt,x)] = R_1(t,x) + E\bigl[E[\xi(dt,x)\mid\mathcal{F}_t]\bigr] = R_1(t,x) + \sum_{n=2}^{\infty}(n-1)b_n R_1(t,x)\,dt - \mu_1 R_1(t,x)\,dt + (L_1 R_1(t,\cdot))(x)\,dt - r R_1(t,x)\,dt + o(dt). \]
Let $\beta=\sum_{n=2}^{\infty}(n-1)b_n$. Then, as $dt\to 0$, the differential equation for $R_1(t,x)$ is
\[ \frac{\partial R_1(t,x)}{\partial t} = (\beta-\mu_1-r)R_1(t,x) + (L_1 R_1(t,\cdot))(x), \qquad R_1(0,x)=\delta_0(x). \]
The same technique helps to find the differential equation for R 2 ( t , x ) :
\[ R_2(t+dt,x) = E\,N_2(t+dt,x) = E[N_2(t,x)+\psi(dt,x)] = R_2(t,x) + E\bigl[E[\psi(dt,x)\mid\mathcal{F}_t]\bigr] = R_2(t,x) + (L_2 R_2(t,\cdot))(x)\,dt - \mu_2 R_2(t,x)\,dt + r R_1(t,x)\,dt + o(dt). \]
From this, as $dt\to 0$, we get:
\[ \frac{\partial R_2(t,x)}{\partial t} = (L_2 R_2(t,\cdot))(x) - \mu_2 R_2(t,x) + r R_1(t,x), \qquad R_2(0,x)=0. \]
Firstly, we solve Equation (78). Let us write the equation for this function again:
\[ \frac{\partial R_1(t,x)}{\partial t} = (\beta-\mu_1-r)R_1(t,x) + (L_1 R_1(t,\cdot))(x), \qquad R_1(0,x)=\delta_0(x). \]
To solve the equation, we apply the discrete Fourier transform (23). Then,
\[ \frac{\partial \widehat{R}_1(t,\theta)}{\partial t} = (\beta-\mu_1-r)\widehat{R}_1(t,\theta) + \varkappa_1\widehat{a}_1(\theta)\widehat{R}_1(t,\theta), \qquad \widehat{R}_1(0,\theta)=1. \]
The solution has the form:
\[ \widehat{R}_1(t,\theta) = e^{(\beta-\mu_1-r)t}\, e^{\varkappa_1\widehat{a}_1(\theta)t}. \]
Remark 13.
For convenience, we denote the inverse Fourier transform (24) of a function f ( θ ) by f ^ ( θ ) ˜ .
Therefore,
\[ R_1(t,x) = \widetilde{e^{(\beta-\mu_1-r)t}\, e^{\varkappa_1\widehat{a}_1(\theta)t}}. \]
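This representation is easy to evaluate numerically for $d=1$. The sketch below is a minimal illustration: it assumes a symmetric nearest-neighbour walk with $a_1(\pm 1)=1/2$, for which the Fourier symbol of the generator is $\varkappa_1(\cos\theta-1)$, and uses illustrative rates $\beta$, $\mu_1$, $r$ (none of these numbers come from the paper); the inverse Fourier transform over $[-\pi,\pi]$ is computed as a simple average over a grid.

```python
import numpy as np

# Illustrative parameters; with beta - mu_1 - r = 0 the critical case of Remark 14 is reproduced.
kappa_1, beta, mu_1, r = 1.0, 0.5, 0.25, 0.25

def R1(t, x, n_grid=4096):
    """R_1(t, x) via numerical inversion of the Fourier representation over [-pi, pi]."""
    theta = np.linspace(-np.pi, np.pi, n_grid, endpoint=False)
    symbol = kappa_1 * (np.cos(theta) - 1.0)               # \varkappa_1 \hat a_1(\theta)
    integrand = np.exp(symbol * t) * np.cos(theta * x)     # imaginary part cancels by symmetry
    return np.exp((beta - mu_1 - r) * t) * integrand.mean()   # mean over grid = (1/2pi) * integral

for t in (1.0, 10.0, 100.0):
    print(f"t = {t:6.1f}:  R_1(t, 0) = {R1(t, 0):.6f}")
# For beta - mu_1 - r = 0 the values decay like t^(-1/2), matching the t^(-d/2)
# asymptotics discussed in Remark 14 below.
```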
Now, we find the solution of Equation (79). Substituting the solution for $R_1(t,x)$ from Equation (80) into Equation (79), we obtain:
\[ \frac{\partial R_2(t,x)}{\partial t} = (L_2 R_2(t,\cdot))(x) - \mu_2 R_2(t,x) + r\,\widetilde{e^{(\beta-\mu_1-r)t}\, e^{\varkappa_1\widehat{a}_1(\theta)t}}, \qquad R_2(0,x)=0. \]
To obtain the solution of differential Equation (81), we consider two cases:
\[ \beta-\mu_1-r = -\mu_2, \qquad \beta-\mu_1-r \ne -\mu_2. \]
Case $\beta-\mu_1-r = -\mu_2$.
We apply the discrete Fourier transform (23) and the method of variation of constants to solve Equation (81).
If $\varkappa_1\widehat{a}_1(\theta) = \varkappa_2\widehat{a}_2(\theta)$, then
\[ \widehat{R}_2(t,\theta) = r\,t\, e^{(\varkappa_2\widehat{a}_2(\theta)-\mu_2)t}. \]
Consequently,
\[ R_2(t,x) = r\,t\,\widetilde{e^{-\mu_2 t}\, e^{\varkappa_2\widehat{a}_2(\theta)t}}. \]
If $\varkappa_1\widehat{a}_1(\theta) - \varkappa_2\widehat{a}_2(\theta) =: d > 0$, then
\[ \widehat{R}_2(t,\theta) = \frac{r}{d}\bigl(e^{dt}-1\bigr)\, e^{-\mu_2 t}\, e^{\varkappa_2\widehat{a}_2(\theta)t}. \]
Then,
\[ R_2(t,x) = \frac{r}{d}\,\widetilde{e^{(\beta-\mu_1-r)t}\, e^{\varkappa_1\widehat{a}_1(\theta)t}} - \frac{r}{d}\,\widetilde{e^{-\mu_2 t}\, e^{\varkappa_2\widehat{a}_2(\theta)t}}. \]
Case $\beta-\mu_1-r \ne -\mu_2$.
Here, the solution of differential Equation (81) has the following form.
If $\varkappa_1\widehat{a}_1(\theta)+\beta-\mu_1-r = \varkappa_2\widehat{a}_2(\theta)-\mu_2$, then
\[ \widehat{R}_2(t,\theta) = r\,t\, e^{-\mu_2 t}\, e^{\varkappa_2\widehat{a}_2(\theta)t}; \]
If $\varkappa_1\widehat{a}_1(\theta)+\beta-\mu_1-r-\varkappa_2\widehat{a}_2(\theta)+\mu_2 =: d > 0$, then
\[ \widehat{R}_2(t,\theta) = \frac{r}{d}\bigl(e^{dt}-1\bigr)\, e^{-\mu_2 t}\, e^{\varkappa_2\widehat{a}_2(\theta)t}. \]
Remark 14.
Let us find the asymptotic behavior of the first moments $R_i(t,x)$, $i=1,2$, in a particular case. Assume that the generators $L_i$, $i=1,2$, defined in Equation (3) are equal, so that $L_1=L_2$. Additionally, consider the case when the underlying random walks have a finite variance of jumps, so that Equation (48) holds. Then, from Equation (49) we have, for each $x\in\mathbb{Z}^d$,
\[ \int_{[-\pi,\pi]^d} e^{\varkappa\widehat{a}(\theta)t}\cos((\theta,x))\,d\theta \sim \gamma_d\, t^{-d/2}, \]
where $\gamma_d$ is specified by Equation (50). Using (82) for the $\widehat{R}_i(t,\theta)$, $i=1,2$, obtained above, we have, for each $x\in\mathbb{Z}^d$, as $t\to\infty$:
\[ R_1(t,x) \sim \gamma_d\, t^{-d/2}, \qquad R_2(t,x) \sim r\,\gamma_d\, t^{-(d-2)/2}, \]
when $\beta-\mu_1-r=0$ and $\mu_2=0$, and
\[ R_1(t,x) \sim e^{At}\gamma_d\, t^{-d/2}, \qquad R_2(t,x) \sim \frac{r}{A}\bigl(e^{At}-1\bigr)\gamma_d\, t^{-d/2}, \]
when $A=\beta-\mu_1-r\ne 0$ and $\mu_2=0$.
So, we have found the first moments for particles of both types. In the next sections, we obtain the explicit form of the second moments.

6.3. The Second Moment for N 1 ( t , x )

Here, we will find the asymptotic behavior of the second moment of the random variable $N_1(t,x)$.
To derive the second moment for $N_1(t,x)$, we consider a more general problem. Let $N_1(t,x,y)$ be the number of particles of the first type at time $t$ at the point $y\in\mathbb{Z}^d$ generated by a single particle of the first type located at time $t=0$ at the site $x\in\mathbb{Z}^d$. The initial condition for $N_1(t,x,y)$ is $N_1(0,x,y)=\delta_x(y)$. In the designations of Section 6.1, $N_1(t,x)=N_1(t,0,x)$. Here, we will use the method of backward Kolmogorov equations. Define the generating function for the random variable $N_1(t,x,y)$ as
F ( t , x , y ; z ) = E e z N 1 ( t , x , y ) ,
where z R , t 0 . From this, we get the following lemma.
Lemma 5.
The generating function F ( t , x , y ; z ) specified by Equation (83) satisfies the differential equation:
F ( t , x , y ; z ) t = ( L 1 , x F ( t , · , y ; z ) ) ( x ) + f F ( t , x , y ; z ) + r ( 1 F ( t , x , y ; z ) ) ; F ( 0 , x , y ; z ) = e z δ x ( y ) .
Proof. 
Consider the generating function F ( t , x , y ; z ) at the time moment t + d t . Then,
F ( t + d t , x , y ; z ) = E e z N 1 ( t + d t , x , y ) = E E e z N 1 ( d t , x , x ) N 1 ( t , x , y ) u 0 e z N 1 ( d t , x , x + u ) N 1 ( t , x + u , y ) | F t = E e z N 1 ( t , x , y ) [ n 2 e z ( n 1 ) N 1 ( t , x , y ) b n d t + e z N 1 ( t , x , y ) ( μ 1 + r ) d t + u 0 e z N 1 ( t , x + u , y ) ϰ 1 a 1 ( u ) d t + 1 n 2 b n + r + μ 1 + ϰ 1 d t + o ( d t ) ] = F ( t , x , y ; z ) + ( L 1 , x F ( t , · , y ; z ) ) ( x ) d t + f F ( t , x , y ; z ) d t + r 1 F ( t , x , y ; z ) d t + o ( d t ) ,
where $f(s) = \mu_1 - s\bigl(\mu_1+\sum_{n\ge 2} b_n\bigr) + \sum_{n\ge 2} b_n s^n$ and
( L i , x Ψ ( t , · , y ) ) ( x ) = ϰ i v 0 a i ( v ) [ Ψ ( t , x + v , y ) Ψ ( t , x , y ) ] , i = 1 , 2 .
Then,
F ( t + d t , x , y ; z ) F ( t , x , y ; z ) = ( L 1 , x F ( t , · , y ; z ) ) ( x ) d t + f F ( t , x , y ; z ) d t + r 1 F ( t , x , y ; z ) d t + o ( d t ) .
Therefore, as d t 0
F ( t , x , y ; z ) t = ( L 1 , x F ( t , · , y ; z ) ) ( x ) + f F ( t , x , y ; z ) + r ( 1 F ( t , x , y ; z ) ) .
The initial condition for the latter equation follows from Equation (83):
F ( 0 , x , y ; z ) = E e z N 1 ( 0 , x , y ) = E e z δ x ( y ) = e z δ x ( y ) .
Later, we also will use the following notation:
( L i , y Ψ ( t , · , y ) ) ( x ) = ϰ i v 0 a i ( v ) [ Ψ ( t , x , y + v ) Ψ ( t , x , y ) ] , i = 1 , 2 .
Let M 1 ( t , x , y ) = E N 1 ( t , x , y ) be the first moment of N 1 ( t , x , y ) . Note that
M 1 ( t , x , y ) t = 2 F ( t , x , y ; z ) t z z = 0 .
Then, from Equation (84) we can derive the differential equation for the first moment by taking the partial derivative of both sides of (84). Omitting the calculations, we obtain
\[ \frac{\partial M_1(t,x,y)}{\partial t} = (L_{1,x} M_1(t,\cdot,y))(x) + (\beta-\mu_1-r)M_1(t,x,y), \qquad M_1(0,x,y)=\delta_x(y). \]
As above, using the discrete Fourier transform (23), we can find the solution of this equation:
\[ \widehat{M}_1(t,\theta,y) = e^{i(\theta,y)}\, e^{(\beta-\mu_1-r)t}\, e^{\varkappa_1\widehat{a}_1(\theta)t}. \]
Then, using the inverse Fourier transform (24), we will obtain:
\[ M_1(t,x,y) = e^{(\beta-\mu_1-r)t}\,\frac{1}{(2\pi)^d}\int_{[-\pi,\pi]^d} e^{\varkappa_1\widehat{a}_1(\theta)t}\, e^{i(\theta,y-x)}\,d\theta. \]
Remark 15.
Notice that from the obtained representation for $M_1(t,x,y)$ we see that the first moment $M_1(t,x,y)$ is a function that depends only on the difference of the considered lattice sites, so that
\[ M_1(t,x,y) = M_1(t,0,y-x). \]
From Equation (84) (by taking partial derivative over parameter z twice and substituting z = 0 ) we can derive the differential equation for the second moment, which can be defined as M 2 ( t , x , y ) = E N 1 2 ( t , x , y ) :
M 2 ( t , x , y ) t = 3 F ( t , x , y ; z ) t z 2 z = 0 .
Consequently, omitting the calculations,
\[ \frac{\partial M_2(t,x,y)}{\partial t} = (L_{1,x} M_2(t,\cdot,y))(x) + (\beta-\mu_1-r)M_2(t,x,y) + \beta^{(2)} M_1^2(t,x,y), \qquad M_2(0,x,y)=\delta_x(y), \]
where $\beta^{(2)} = \sum_{n\ge 2} n(n-1)\,b_n$.
Applying the discrete Fourier transform (23) to this equation, we obtain:
\[ \frac{\partial \widehat{M}_2(t,\theta,y)}{\partial t} = \bigl(\varkappa_1\widehat{a}_1(\theta)+\beta-\mu_1-r\bigr)\widehat{M}_2(t,\theta,y) + \frac{\beta^{(2)}}{(2\pi)^d}\int_{[-\pi,\pi]^d} \widehat{M}_1(t,\theta-\psi,y)\,\widehat{M}_1(t,\psi,y)\,d\psi; \qquad \widehat{M}_2(0,\theta,y)=e^{i(\theta,y)}. \]
Remark 16.
As in Section 6.2, we consider the asymptotic behavior of $M_2(t,x,y)$ in the case when the random walk of the particles of the first type has a finite variance of jumps, so that
\[ \sum_{v} a_1(v)\,|v|^2 < \infty. \]
Consider Equation (86). The solution of this equation is the sum of a particular solution of Equation (86) and the solution of the homogeneous equation
\[ \frac{\partial M_2(t,x,y)}{\partial t} = (L_{1,x} M_2(t,\cdot,y))(x) + (\beta-\mu_1-r)M_2(t,x,y). \]
Let M 2 , h ( t , x , y ) be the solution of homogeneous equation and M 2 , p ( t , x , y ) be the particular one. Then, M 2 ( t , x , y ) = M 2 , h ( t , x , y ) + M 2 , p ( t , x , y ) .
Assume that $\beta-\mu_1-r=0$. Then, Equation (86) takes the form
\[ \frac{\partial M_2(t,x,y)}{\partial t} = (L_{1,x} M_2(t,\cdot,y))(x) + \beta^{(2)} M_1^2(t,x,y). \]
From Remark 9 and Equation (82), we get that $M_{2,h}(t,x,y) \sim \gamma_d\, t^{-d/2}$ as $t\to\infty$. Note that from Remark 14 we have that $M_1(t,x,y) \sim \gamma_1/\sqrt{t}$ as $t\to\infty$ for $d=1$ and each $x,y\in\mathbb{Z}^d$. Let, as $t\to\infty$,
\[ f(t) = \beta^{(2)}\gamma_1^2 \ln t + o(\ln t). \]
Then, substituting f ( t ) into Equation (87) we have
\[ \frac{\beta^{(2)}\gamma_1^2}{t} + o(1/t) = \frac{\beta^{(2)}\gamma_1^2}{t} + o(1/t). \]
Then, as t , f ( t ) is the solution of Equation (87) and M 2 , p ( t , x , y ) f ( t ) , t , for each x , y Z d .
Similarly, for $d\ge 2$, we can find that, for each $x,y\in\mathbb{Z}^d$,
\[ M_{2,p}(t,x,y) \sim \frac{\gamma_d^2\,\beta^{(2)}}{(d-1)\,t^{d-1}}. \]
Consequently, as $M_2(t,x,y) = M_{2,h}(t,x,y) + M_{2,p}(t,x,y)$, we obtain that, for each $x,y\in\mathbb{Z}^d$,
\[ M_2(t,x,y) \sim \beta^{(2)}\gamma_1^2 \ln t \ \ \text{for } d=1, \qquad M_2(t,x,y) \sim \bigl(\gamma_2+\gamma_2^2\beta^{(2)}\bigr)\, t^{-1} \ \ \text{for } d=2, \qquad M_2(t,x,y) \sim \gamma_d\, t^{-d/2} \ \ \text{for } d\ge 3. \]
We have thus obtained the asymptotic behavior of the first two moments of the random variable N 1 ( t , x , y ) . In the next section, we will examine the effect of intermittency (see definition in the next section ) for the random variable N 1 ( t , x , y ) using M i ( t , x , y ) , i = 1 , 2 .

6.4. Intermittency for N 1 ( t , x )

In Section 6.2 and Section 6.3, we obtained the solutions for the first two moments for the random variable N 1 ( t , x , y ) . Here, we will study the effect of intermittency in the simplest case for the number of particles of the first type. Introduce the following definition (see, for example, [21]).
Definition 4.
The field $\Lambda(t,x)$ is called intermittent as $t\to\infty$ if
\[ \lim_{t\to\infty} \frac{E\,\Lambda^2(t,x)}{\bigl(E\,\Lambda(t,x)\bigr)^2} = \infty, \]
where $x\in\Omega(t)$, and $\Omega(t)$ is a non-decreasing family of sets.
Remark 17.
We are going to consider the effect of intermittency for the random variable $N_1(t,x,y)$. In our designations, $N_1(t,y-x) = N_1(t,0,y-x)$.
In what follows, we are going to study the effect of intermittency in the region of $x$ and $y$ where $|y-x| = O(\sqrt{t})$ as $t\to\infty$.
Denote by p ( t , x , y ) the solution of the following Cauchy problem
\[ \frac{\partial p(t,x,y)}{\partial t} = (L_1 p(t,\cdot,y))(x), \qquad p(0,x,y)=\delta_x(y). \]
Then, the representation from Equation (85) has the form
\[ M_1(t,x,y) = p(t,x,y)\, e^{(\beta-\mu_1-r)t}. \]
Using Duhamel’s principle and Equation (88), from Equation (86) we obtain
\[ M_2(t,x,y) = M_1(t,x,y) + \beta^{(2)}\int_0^t \sum_{w\in\mathbb{Z}^d} M_1(t-s,x,w)\, M_1^2(s,w,y)\,ds. \]
Remark 18.
In the case when the underlying random walk has an infinite variance of jumps, so that relation (1.2) of [21] holds, and $\nu := \beta-\mu_1-r > 0$, the corresponding results were obtained in [21] (Th. 1.2).
Now, consider the case where $\nu>0$ and the underlying random walk has superexponentially light tails, i.e., for all $\lambda\in\mathbb{R}^d$
\[ \sum_{z\in\mathbb{Z}^d} e^{(\lambda,z)}\, a_1(z) < \infty. \]
From [13] (Eq. (4.7)), we have for $t\to\infty$ and $|x-y|=O(\sqrt{t})$
\[ p(t,x,y) = \frac{e^{-(B^{-1}(x-y),\,(x-y))/(2t)}}{(2\pi t)^{d/2}\sqrt{\det B}} + o\bigl(t^{-d/2}\bigr), \]
where B = ( b ( k j ) ) is the matrix with elements
b ( k j ) = z Z d { 0 } z k z j a 1 ( z ) , k , j = 1 , , d .
Note that for all x , y Z d , t > 0
\[ p(t,x,y) = \int_{[-\pi,\pi]^d} e^{i(\theta,y-x)}\,\widehat{p}(t,\theta,0)\,d\theta = \int_{[-\pi,\pi]^d} \cos(\theta,y-x)\,\widehat{p}(t,\theta,0)\,d\theta \le \int_{[-\pi,\pi]^d} \widehat{p}(t,\theta,0)\,d\theta = p(t,0,0). \]
Then, for $t\to\infty$,
\[ p(t,0,0) = C\,t^{-d/2} + o\bigl(t^{-d/2}\bigr), \]
and, since $0 < p(t,0,0) < 1$, we have
\[ p(t,0,0) < C\,(t+1)^{-d/2}. \]
For $t\to\infty$, $|x-y|=O(\sqrt{t})$, and $\nu>0$, from Equations (88) and (89) we obtain
\[ \frac{M_2(t,x,y)}{M_1^2(t,x,y)} = \frac{M_1(t,x,y)}{M_1^2(t,x,y)} + \frac{\beta^{(2)} e^{\nu t}\int_0^t e^{\nu s}\sum_{w\in\mathbb{Z}^d} p(t-s,x,w)\,p^2(s,w,y)\,ds}{e^{2\nu t}\,p^2(t,x,y)} < \frac{1}{e^{\nu t}\,p(t,x,y)} + \frac{\beta^{(2)}\int_0^t e^{\nu s}\,p(s,0,0)\sum_{w\in\mathbb{Z}^d} p(t-s,x,w)\,p(s,w,y)\,ds}{e^{\nu t}\,p^2(t,x,y)}. \]
Using the Kolmogorov–Chapman equation, we obtain
w Z d p ( t s , x , w ) p ( s , w , y ) = p ( t , x , y )
in the numerator of the last summand. Then, we can continue the estimation:
\[ = \frac{\beta^{(2)}\,p(t,x,y)\int_0^t e^{\nu s}\,p(s,0,0)\,ds}{e^{\nu t}\,p^2(t,x,y)} + o(1) = \frac{\beta^{(2)}\int_0^t e^{\nu s}\,p(s,0,0)\,ds}{e^{\nu t}\,p(t,x,y)} + o(1) < C\int_0^t e^{\nu(s-t)}\Bigl(\frac{t}{s+1}\Bigr)^{d/2} ds + o(1). \]
Thus, the random variable $N_1(t,x,y)$ is non-intermittent for $|x-y| = O(\sqrt{t})$.
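The boundedness of the last integral, which is what rules out intermittency here, is easy to check numerically. The sketch below uses illustrative values of $\nu$ and $d$ (not taken from the paper) and a plain Riemann sum.

```python
import numpy as np

# Numerical check that I(t) = \int_0^t e^{nu (s - t)} (t / (s + 1))^{d/2} ds stays bounded
# as t grows; nu = 0.5 and d = 1 are illustrative values.
nu, d = 0.5, 1

def bound(t, n=200_000):
    s = np.linspace(0.0, t, n, endpoint=False)
    ds = t / n
    return np.sum(np.exp(nu * (s - t)) * (t / (s + 1.0)) ** (d / 2)) * ds

for t in (10, 100, 1000, 10000):
    print(f"t = {t:6d}:  I(t) = {bound(t):.4f}")
# The values approach a finite limit (about 1/nu), so the ratio M_2 / M_1^2 stays bounded.
```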

6.5. The Second Moment for N 2 ( t , x )

In this section, we will set up the differential equation for the second order correlation function for N 2 ( t , x ) and determine the asymptotic behavior in the special case. Define the following correlation function for the second type particles
R 22 ( t , x , y ) = E [ N 2 ( t , x ) N 2 ( t , y ) ] .
To obtain the differential equation for $R_{22}(t,x,y)$, we consider this random variable at the time moment $t+dt$. Unlike the differential equation for $N_1(t,x)$, here we consider two cases: $x=y$ and $x\ne y$.
Firstly, consider the case when x = y . Here, we have for R 22 ( t + d t , x , x ) :
R 22 ( t + d t , x , x ) = E N 2 2 ( t + d t , x ) = E N 2 2 ( t , x ) + 2 E N 2 ( t , x ) [ E [ ψ ( d t , x ) | F t ] [ + E E [ ψ 2 ( d t , x ) | F t ] ] = R 22 ( t , x , x ) + 2 E N 2 ( t , x ) × [ ϰ 2 z 0 a 2 ( z ) N 2 ( t , x + z , x ) d t + r N 1 ( t , x ) d t μ 2 N 2 ( t , x ) d t ϰ 2 N 2 ( t , x ) d t + o ( d t ) ] + E [ ϰ 2 z 0 a 2 ( z ) N 2 ( t , x + z , x ) d t + r N 1 ( t , x ) d t + μ 2 N 2 ( t , x ) d t + ϰ 2 N 2 ( t , x ) d t + o ( d t ) ] = R 22 ( t , x , x ) + 2 ( L 2 , x R 22 ( t , · , x ) ) ( x ) d t + 2 r R 12 ( t , x , x ) d t + ( L 2 R 2 ( t , · ) ) ( x ) d t 2 μ 2 R 22 ( t , x , x ) d t + 2 r R 1 ( t , x ) d t + μ 2 R 2 ( t , x ) d t + 2 ϰ 2 R 2 ( t , x ) d t + o ( d t ) .
Let R 12 ( t , x , y ) = E [ N 1 ( t , x ) N 2 ( t , y ) ] . Then, as d t 0 we obtain the differential equation for R 22 ( t , x , x ) :
R 22 ( t , x , x ) t = 2 ( L 2 , x R 22 ( t , · , x ) ) ( x ) 2 μ 2 R 22 ( t , x , x ) + 2 r R 12 ( t , x , x ) + ( L 2 R 2 ( t , · ) ) ( x ) + r R 1 ( t , x ) + μ 2 R 2 ( t , x ) + 2 ϰ 2 R 2 ( t , x ) ; R 22 ( 0 , x , x ) = 0 .
Now, consider the case when $x\ne y$. Then,
R 22 ( t , x , y ) = E E [ ( N 2 ( t , x ) + ψ ( d t , x ) ) ( N 2 ( t , y ) + ψ ( d t , y ) ) | F t ] = R 22 ( t , x , y ) + E N 2 ( t , x ) [ ϰ 2 z 0 a 2 ( z ) N 2 ( t , y + z ) r N 1 ( t , y ) d t μ 2 N 2 ( t , y ) d t ϰ 2 N 2 ( t , y ) d t + o ( d t ) ] + E N 2 ( t , y ) [ ϰ 2 z 0 a 2 ( z ) N 2 ( t , x + z ) r N 1 ( t , x ) d t μ 2 N 2 ( t , x ) d t ϰ 2 N 2 ( t , x ) d t + o ( d t ) ] E [ ϰ 2 a 2 ( x y ) N 2 ( t , x ) d t + ϰ 2 a 2 ( y x ) N 2 ( t , y ) d t + o ( d t ) ] = R 22 ( t , x , y ) + ( L 2 , x R 22 ( t , · , y ) ) ( x ) d t + ( L 2 , y R 22 ( t , · , y ) ) ( x ) d t 2 μ 2 R 22 ( t , x , y ) d t + r ( R 12 ( t , x , y ) + R 12 ( t , y , x ) ) d t ϰ 2 a 2 ( x y ) R 2 ( t , x ) + R 2 ( t , y ) d t + o ( d t ) .
Therefore, as $dt\to 0$, we have the differential equation for $R_{22}(t,x,y)$ when $x\ne y$:
R 22 ( t , x , y ) t = ( L 2 , x R 22 ( t , · , y ) ) ( x ) + ( L 2 , y R 22 ( t , · , y ) ) ( x ) 2 μ 2 R 22 ( t , x , x ) ϰ 2 a 2 ( x y ) R 2 ( t , x ) + R 2 ( t , y ) + r R 12 ( t , x , y ) + R 12 ( t , y , x ) ; R 22 ( 0 , x , y ) = 0 .
Above, we defined the function $R_{12}(t,x,y) = E\,N_1(t,x)N_2(t,y)$. This function is unknown; consequently, we need to obtain the differential equation for it. Using the same technique as for $R_{22}(t,x,y)$, we get:
R 12 ( t + d t , x , y ) = E [ E [ ( N 1 ( t , x ) + ξ ( d t , x ) ) ( N 2 ( t , y ) + ψ ( d t , y ) ) | F t ] ] = R 12 ( t , x , y ) + ( β μ 1 r μ 2 ) R 12 ( t , x , y ) d t + r E [ N 1 ( t , x ) N 1 ( t , y ) ] d t δ x ( y ) r R 1 ( t , x ) d t + ( L 1 , x R 12 ( t , · , y ) ) ( x ) + ( L 2 , y R 12 ( t , · , y ) ) ( x ) d t + o ( d t ) ,
Then, as d t 0 the differential equation for R 12 ( t , x , y ) is
R 12 ( t , x , y ) t = ( L 1 , x R 12 ( t , · , y ) ) ( x ) + ( L 2 , y R 12 ( t , · , y ) ) ( x ) + ( β μ 1 r μ 2 ) R 12 ( t , x , y ) + r E [ N 1 ( t , x ) N 1 ( t , y ) ] δ x ( y ) r R 1 ( t , x ) ; R 12 ( 0 , x , y ) = 0 .
Let R 11 ( t , x , y ) = E [ N 1 ( t , x ) N 1 ( t , y ) ] . To get the behavior of this function, we also need to have the differential equation for it. Then, consider R 11 ( t + d t , x , y ) :
R 11 ( t + d t , x , y ) = R 11 ( t , x , y ) + 2 ( β μ 1 r ) R 11 ( t , x , y ) d t + ( L 1 , x R 11 ( t , · , y ) ) ( x ) + ( L 1 , y R 11 ( t , · , y ) ) ( x ) d t + δ x ( y ) ( ( n 2 ( n 1 ) 2 b n + μ 1 + r + 2 ϰ 1 ) R 1 ( t , x ) + ( L 1 R 1 ( t , · ) ) ( x ) ) d t ϰ 1 a 1 ( x y ) R 1 ( t , x ) + R 1 ( t , y ) d t + o ( d t ) .
If d t 0 , we obtain R 11 ( t , x , y ) :
R 11 ( t , x , y ) t = ( L 1 , x R 11 ( t , · , y ) ) ( x ) + ( L 1 , y R 11 ( t , · , y ) ) ( x ) + 2 ( β μ 1 r ) R 11 ( t , x , y ) + δ x ( y ) ( β + μ 1 + r ) R 1 ( t , x ) ( L 1 R 1 ( t , · ) ) ( x ) ϰ 1 a 1 ( x y ) R 1 ( t , x ) + R 1 ( t , y ) ; R 11 ( 0 , x , y ) = δ 0 ( x ) δ 0 ( y ) .
In the calculations below, we need the following lemma.
Lemma 6.
Let $G(t,x,y) := R_1(t,x)R_2(t,y)$ and $K(t,x,y) := R_1(t,x)R_1(t,y)$. Then, the functions $G(t,x,y)$ and $K(t,x,y)$ satisfy the following differential equations:
\[ \frac{\partial G(t,x,y)}{\partial t} = (L_{1,x} G(t,\cdot,y))(x) + (L_{2,y} G(t,\cdot,y))(x) + (\beta-\mu_1-r-\mu_2)\,G(t,x,y) + r\,K(t,x,y); \qquad G(0,x,y)=0. \]
\[ \frac{\partial K(t,x,y)}{\partial t} = (L_{1,x} K(t,\cdot,y))(x) + (L_{1,y} K(t,\cdot,y))(x) + 2(\beta-\mu_1-r)\,K(t,x,y); \qquad K(0,x,y)=\delta_0(x)\delta_0(y). \]
Proof. 
Note that
G ( t , x , y ) t = R 2 ( t , y ) R 1 ( t , x ) t + R 1 ( t , x ) R 2 ( t , y ) t .
Then, with the usage of Equations (78) and (79), we have
G ( t , x , y ) t = R 2 ( t , y ) R 1 ( t , x ) t + R 1 ( t , x ) R 2 ( t , y ) t = R 2 ( t , y ) ( ( β μ 1 r ) R 1 ( t , x ) + ( L 1 R 1 ( t , · ) ) ( x ) ) + R 1 ( t , x ) ( L 2 R 2 ( t , · ) ) ( y ) μ 2 R 2 ( t , y ) + r R 1 ( t , y ) = ( L 1 , x G ( t , · , y ) ) ( x ) + ( L 2 , y G ( t , · , y ) ) ( x ) + ( β μ 1 r μ 2 ) G ( t , x , y ) + r K ( t , x , y ) .
Similarly, we can obtain the differential equation for K ( t , x , y ) . Notice that
K ( t , x , y ) t = R 1 ( t , y ) R 1 ( t , x ) t + R 1 ( t , x ) R 1 ( t , y ) t .
Consequently, using Equation (78), we get
K ( t , x , y ) t = R 1 ( t , y ) R 1 ( t , x ) t + R 1 ( t , x ) R 1 ( t , y ) t = R 1 ( t , y ) ( ( β μ 1 r ) R 1 ( t , x ) + ( L 1 R 1 ( t , · ) ) ( x ) ) + R 1 ( t , x ) ( β μ 1 r ) R 1 ( t , y ) + ( L 1 R 1 ( t , · ) ) ( y ) = ( L 1 , x K ( t , · , y ) ) ( x ) + ( L 1 , y K ( t , · , y ) ) ( x ) + 2 ( β μ 1 r ) K ( t , x , y ) .
The initial conditions follow from
G ( 0 , x , y ) = R 1 ( 0 , x ) R 2 ( 0 , y ) = 0 , K ( 0 , x , y ) = R 1 ( 0 , x ) R 1 ( 0 , y ) = δ 0 ( x ) δ 0 ( y ) .
Let us find the asymptotic behavior of the second moment $R_{22}(t,x,y)$ in a particular case on $\mathbb{Z}^d$. Let the generators of the random walks be identical for both types of particles, so that $L_1=L_2=:L$. Assume that the random walk with generator $L$ has a finite variance of jumps (see (48)). For $A=\beta-\mu_1-r>0$ and $\mu_2=0$, it was found in Section 6.2 that, as $t\to\infty$, for each $x\in\mathbb{Z}^d$,
\[ R_1(t,x) \sim e^{At}\, t^{-d/2}, \qquad R_2(t,x) \sim \frac{r\,e^{At}}{A\, t^{d/2}}. \]
Note that from Lemma 6 and Equation (91), we obtain
\[ \frac{\partial\bigl(K(t,x,y)-R_{11}(t,x,y)\bigr)}{\partial t} = \bigl(L_{1,x}(K(t,\cdot,y)-R_{11}(t,\cdot,y))\bigr)(x) + \bigl(L_{1,y}(K(t,\cdot,y)-R_{11}(t,\cdot,y))\bigr)(x) + 2(\beta-\mu_1-r)\bigl(K(t,x,y)-R_{11}(t,x,y)\bigr) + F(R_1(t,x)), \]
where $F(R_1(t,x))$ is a function which depends linearly on $R_1(t,x)$. Then, from the representation of $R_1(t,x)$, as $t\to\infty$ we have, for each $x\in\mathbb{Z}^d$,
\[ F(R_1(t,x)) \sim C(y-x)\, e^{At}\, t^{-d/2}, \]
where $C(\cdot)$ is a function which can be obtained from Equation (91). Then, as $t\to\infty$,
\[ K(t,x,y) - R_{11}(t,x,y) = C\, e^{At}\, t^{-d/2}, \qquad C = \mathrm{const}. \]
The same technique helps us to find out that
\[ G(t,x,y) - R_{12}(t,x,y) = C\, e^{At}\, t^{-d/2}, \quad t\to\infty, \qquad C = \mathrm{const}. \]
Due to the homogeneity in space, we can consider the following quantity: $R_{22}(t,u) := R_{22}(t,0,y-x)$, where $u=y-x$.
Using the above results, rewrite the equation for the second correlation function:
R 22 ( t , u ) t = 2 ( L R 22 ( t , · ) ) ( u ) + r 2 γ 1 2 ( u ) e 2 A t t d + δ 0 ( u ) r γ 1 ( u ) e A t t d / 2 a 2 ( u ) r γ 2 ( u ) e A t 2 π t ; R 22 ( 0 , u ) = 0 .
Divide the last equation into two equations and find the solutions R 22 ( 1 ) ( t , u ) and R 22 ( 2 ) ( t , u ) of the following equations:
R 22 ( 1 ) ( t , u ) t = 2 ( L R 22 ( 1 ) ( t , · ) ) ( u ) + r 2 γ 1 ( u ) e 2 A t t d , R 22 ( 1 ) ( 0 , u ) = 0 ; R 22 ( 2 ) ( t , u ) t = 2 ( L R 22 ( 2 ) ( t , · ) ) ( u ) + δ 0 ( u ) r e A t 2 π t a 2 ( u ) r e A t t d / 2 , R 22 ( 2 ) ( 0 , u ) = 0 .
Then, we will have $R_{22}(t,u) = R_{22}^{(1)}(t,u) + R_{22}^{(2)}(t,u)$. The solution of the first equation for large $t$ behaves asymptotically as $R_{22}^{(1)}(t,u) \sim C_1(u)\, e^{2At}\, t^{-d}$ for each $u\in\mathbb{Z}^d$, whereas the solution of the second equation behaves as $R_{22}^{(2)}(t,u) \sim C_2(u)\, e^{At}\, t^{-d/2}$ for each $u\in\mathbb{Z}^d$. Thus, $R_{22}(t,u) \sim C_1(u)\, e^{2At}\, t^{-d}$ for each $u\in\mathbb{Z}^d$.
Here, for a fixed space coordinate, we do not have the intermittency effect:
\[ \frac{R_{22}(t,x,x)}{R_2^2(t,x)} \to \mathrm{const} < \infty, \qquad t\to\infty, \]
for each x Z d .

7. Simulation of BRW

This section is devoted to modeling the process using the Python programming language. We consider the state of the system as an array whose elements are lists of the form $[i, x, t_1, t_2]$, where $i$ characterizes the type of a particle, $x=(x_1,\ldots,x_d)$ is its spatial coordinate, $t_1$ is the time of its entry into a given position (the time when it was born at $x$ or jumped to $x$), and $t_2$ is the time of its exit from this position (the time when it died at $x$ or jumped out of $x$). Recall that we treat all events related to the production of offspring, including the transformation from one type into the other, as the death of the parent particle with the production of $k$ descendants of the first type and $l$ of the second type.
Initialization. We set the characteristics of BRW, i = 1 , 2 :
  • d is the lattice dimension;
  • R is the array consisting of a finite number of lists [ i , x , 0 , 0 ] characterizing types i and particle positions x = ( x 1 , , x d ) at the initial moment of time;
  • ϰ i are diffusion coefficients;
  • A i = ( a i ( x , y ) ) are matrices of the random walk intensities, by which the generators (3) are determined;
  • μ i 0 are the death intensities;
  • β i ( k , l ) 0 are the birth intensities;
  • r 0 is the intensity of degeneration from the first type to the second;
  • T > 0 is the duration of evolution under consideration.
Algorithm step. We select one of the elements $[i, x, t_1, t_2]$ of the array R such that $t_2 < T$. The particle spends an exponentially distributed time $dt$ at the current position $x$, after which it does one of the following (a condensed Python sketch of this step is given after the data-analysis paragraph below):
  • with probability μ i / ( ϰ i + μ i + k + l 2 β i ( k , l ) + r i ) dies;
  • with probability β i ( k , l ) / ( ϰ i + μ i + k + l 2 β i ( k , l ) + r i ) divides into k + l particles, then we append k lists [ 1 , x , t 2 , t 2 + d t ] and l lists [ 2 , x , t 2 , t 2 + d t ] to the array;
  • with probability $\varkappa_i a_i(z) / (\varkappa_i + \mu_i + \sum_{k+l\ge 2}\beta_i(k,l) + r_i)$ jumps from position $x$ to position $x+z \ne x$, then we append $[i, x+z, t_2, t_2+dt]$ to the array;
  • with probability r / ( ϰ 1 + μ 1 + k + l 2 β 1 ( k , l ) + r ) turns into a particle of the second type, then we append [ 2 , x , t 2 , t 2 + d t ] to the array.
The considered element $[i, x, t_1, t_2]$ then moves from the array R (processing) to the array H (history).
Stop condition. Algorithm steps are repeated as long as there are elements $[i, x, t_1, t_2]$ in the array R such that $t_2 < T$.
Data analysis. After the process is completed, the entire history of particles in different states is located in the arrays R and H. To find out the number, type, and spatial configuration of particles at time $t$, we select those elements $[i, x, t_1, t_2]$ of the array H for which $t_1 \le t < t_2$.
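The description above can be condensed into a few dozen lines of Python. The sketch below is a minimal reimplementation of the algorithm for $d=1$, not the authors' original program; the rates are those used for Figure 1, while the time horizon T and the choice of 300 initial particles of the first type are illustrative assumptions.

```python
import random

# Condensed, illustrative event-driven simulation of the two-type BRW on Z (d = 1).
kappa = {1: 1.0, 2: 4.0}
jumps = {1: [(1, 0.5), (-1, 0.5)],
         2: [(z, 1/6) for z in (1, 2, 3, -1, -2, -3)]}
mu = {1: 0.25, 2: 0.375}
births = {1: {(2, 0): 0.125, (1, 1): 0.125},      # beta_1(k, l)
          2: {(0, 2): 0.125, (1, 1): 0.25}}       # beta_2(k, l)
r = {1: 0.0, 2: 0.0}                              # type-change intensity (unused for Figure 1)
T = 20.0                                          # illustrative time horizon

def total_rate(i):
    return kappa[i] + mu[i] + sum(births[i].values()) + r[i]

def step(rec, R, H):
    """One algorithm step: process a single record [i, x, t1, t2] taken from R."""
    i, x, t1, _ = rec
    t2 = t1 + random.expovariate(total_rate(i))   # exponential sojourn time
    H.append([i, x, t1, t2])
    if t2 >= T:
        return
    u, acc = random.random() * total_rate(i), 0.0
    for (k, l), b in births[i].items():           # branching: k type-1 and l type-2 offspring
        acc += b
        if u < acc:
            R.extend([1, x, t2, None] for _ in range(k))
            R.extend([2, x, t2, None] for _ in range(l))
            return
    acc += mu[i]
    if u < acc:                                   # death
        return
    acc += r[i]
    if u < acc:                                   # type change 1 -> 2
        R.append([2, x, t2, None])
        return
    z = random.choices([z for z, _ in jumps[i]],  # random walk jump
                       weights=[w for _, w in jumps[i]])[0]
    R.append([i, x + z, t2, None])

def configuration(H, t):
    """Particles alive at time t: records with t1 <= t < t2."""
    return [(i, x) for i, x, t1, t2 in H if t1 <= t < t2]

random.seed(0)
R = [[1, x, 0.0, None] for x in range(0, 300)]    # 300 initial particles of type 1
H = []
while R:
    step(R.pop(), R, H)
print("particles alive at time T/2:", len(configuration(H, T / 2)))
```

In this sketch, the exit time $t_2$ of a record is drawn when the record is processed, which is equivalent to the description above but avoids keeping unfinished exit times in the array R.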
Simulations. Suppose $d=1$ and at time $t=0$ there are 300 initial particles on the segment $[0,300]\subset\mathbb{Z}$. The random walk for the particles of the first type has intensities $a_1(z)=1/2$ for $|z|=1$ and $\varkappa_1=1$; for the second type, $a_2(z)=1/6$ for $|z|=1,2,3$ and $\varkappa_2=4$. Figure 1 shows the simulation with parameters $\mu_1=0.25$, $\beta_1(2,0)=0.125$, $\beta_1(1,1)=0.125$, $\mu_2=0.375$, $\beta_2(0,2)=0.125$, $\beta_2(1,1)=0.25$, and all other birth/death intensities equal to 0. It demonstrates the clustering effect in the case of the critical branching law described in Section 5.
Suppose $d=2$ and at the initial time $t=0$ there are 200 particles of the first type at $(x_1,x_2)=(0,0)$. We present the results for the case when the particles of the second type cannot produce offspring (this case was considered in the example above), with the following parameters. The walk of the particles of the first type is set by the generator $L_1$ with $\varkappa_1=1$ and intensities $a_1(z)=1/80$, where $z\in\{(z_1,z_2)\ne(0,0): z_1,z_2\in\mathbb{Z},\ -4\le z_1\le 4,\ -4\le z_2\le 4\}$. The walk of the particles of the second type is set using the generator $L_2$ with $\varkappa_2=1$ and intensities $a_2(z)=1/24$, where $z\in\{(z_1,z_2)\ne(0,0): z_1,z_2\in\mathbb{Z},\ -2\le z_1\le 2,\ -2\le z_2\le 2\}$. We record the number and arrangement of particles of both types at 6 time points. This simulation follows the model in Section 6. Figure 2 shows the simulation with parameters $\mu_1=0.05$, $\beta_1(2,0)=0.5$, $r=0.45$, and all other birth/death intensities equal to 0.

8. Conclusions

In this work, we considered a continuous-time branching random walk with two types of particles. The main results were devoted to the study of the limiting behavior of the moments of subpopulations generated by a single particle of each type. In particular, in Section 3, we have derived the differential equations for the first moments of subpopulations and found their solutions, which allows us to find exact expressions for their asymptotics. Similar results for the second moments were obtained in Section 4. In Section 5, we have shown that for particles of both types with the underlying recurrent random walks on Z d a phenomenon of clustering of particles can be observed over long times, which means that the majority of particles are concentrated in some particular areas. The obtained results were then applied in Section 6 to study epidemic propagation. In this model, we considered two types of particles: infected and immunity generated. At the beginning, there is an infected particle that can infect others. Here, for the local number of particles of each type at a lattice point, we study the moments and their limiting behavior. Additionally, the effect of intermittency of the infected particles was studied for a supercritical branching process at each lattice point. To demonstrate the effect of limit clustering for the epidemiological model, we provided the results of a numerical simulation in Section 7.
We would like to emphasize that the present work is primarily devoted to a theoretical analysis of the effects that occur in branching random walks with two types of particles. We have tried to illustrate the obtained theoretical results by a numerical simulation. However, this simulation should not be considered a full-fledged numerical analysis of the situation under consideration. Therefore, the question of developing dedicated programs (in Python, R, or any other language) for the numerical analysis of the considered processes arises quite naturally. In this paper, the authors did not undertake such a task, mainly because the computational aspects of modeling multidimensional processes form a special field that can hardly be treated professionally and completely in one or a few sections of even such an extensive article as ours. Possibly, further special studies will be devoted to it.

Author Contributions

Conceptualization, S.M. and E.Y.; Formal analysis, I.M.; Funding acquisition, E.Y.; Investigation, I.M., D.B. and E.Y.; Methodology, E.Y.; Visualization, D.B.; Writing—original draft, I.M. and D.B.; Writing—(review and editing), E.Y. All authors have read and agreed to the published version of the manuscript.

Funding

This research was funded by the Russian Foundation for the Basic Research (RFBR), grant number 20-01-00487, and the Russian Science Foundation (RSF), grant number 20-11-20119. Iu. Makarova, D. Balashova and E. Yarovaya were funded by the Russian Foundation for the Basic Research (RFBR); S. Molchanov was funded by the Russian Science Foundation (RSF).

Institutional Review Board Statement

Not applicable.

Informed Consent Statement

Not applicable.

Acknowledgments

The authors thank the anonymous reviewers for numerous valuable comments which, we hope, have substantially improved the presentation.

Conflicts of Interest

The authors declare no conflict of interest.

References

  1. Yarovaya, E.B. Branching Random Walks in a Heterogeneous Environment; Center of Applied Investigations of the Faculty of Mechanics and Mathematics of the Moscow State University: Moscow, Russia, 2007. (In Russian)
  2. Bulinskaya, E.V. Spread of a catalytic branching random walk on a multidimensional lattice. Stochastic Process. Appl. 2018, 128, 2325–2340.
  3. Barczy, M.; Li, Z.; Pap, G. Stochastic differential equation with jumps for multi-type continuous state and continuous time branching processes with immigration. ALEA Lat. Am. J. Probab. Math. Stat. 2015, 12, 129–169.
  4. Barczy, M.; Nedényi, F.K.; Pap, G. On aggregation of multitype Galton-Watson branching processes with immigration. Mod. Stoch. Theory Appl. 2018, 5, 53–79.
  5. Makarova, Y.; Han, D.; Molchanov, S.; Yarovaya, E. Branching random walks with immigration. Lyapunov stability. Markov Process. Related Fields 2019, 25, 683–708.
  6. Makarova, Y.; Kutsenko, V.; Yarovaya, E. On Two-Type Branching Random Walks and Their Applications for Genetic Modelling. In Recent Developments in Stochastic Methods and Applications; Shiryaev, A.N., Samouylov, K.E., Kozyrev, D.V., Eds.; Springer: Cham, Switzerland, 2021; Volume 371, pp. 255–268.
  7. Molchanov, S.; Whitmeyer, J. Spatial models of population processes. In Modern Problems of Stochastic Analysis and Statistics; Springer: Cham, Switzerland, 2017; Volume 208, pp. 435–454.
  8. Sevast'yanov, B.A. Vetvyashchiesya Protsessy [Branching Processes]; Nauka: Moscow, Russia, 1971; p. 436. (In Russian)
  9. Braunsteins, P.; Hautphenne, S. Extinction in lower Hessenberg branching processes with countably many types. Ann. Appl. Probab. 2019, 29, 2782–2818.
  10. Braunsteins, P.; Hautphenne, S. The probabilities of extinction in a branching random walk on a strip. J. Appl. Probab. 2020, 57, 811–831.
  11. Vatutin, V.; Wachtel, V. Multi-type subcritical branching processes in a random environment. Adv. Appl. Probab. 2018, 50, 281–289.
  12. Vatutin, V.A.; D'yakonova, E.E. The survival probability for a class of multitype subcritical branching processes in random environment. Math. Notes 2020, 107, 189–200.
  13. Molchanov, S.A.; Yarovaya, E.B. Large deviations for a symmetric branching random walk on a multidimensional lattice. Proc. Steklov Inst. Math. 2013, 282, 186–201.
  14. Yarovaya, E.B.; Stoyanov, J.M.; Kostyashin, K.K. On conditions for a probability distribution to be uniquely determined by its moments. Theory Probab. Appl. 2020, 64, 579–594.
  15. Stoyanov, J.M. Counterexamples in Probability; Dover Publications, Inc.: Mineola, NY, USA, 2013.
  16. Yarovaya, E.B. Spectral properties of evolutionary operators in branching random walk models. Math. Notes 2012, 92, 115–131.
  17. Filippov, A.F. Sbornik Zadach po Differentsial'nym Uravneniyam [A Collection of Problems on Differential Equations]; Nauka: Moscow, Russia, 1992; p. 128. (In Russian)
  18. Molchanov, S.A.; Yarovaya, E.B. Limit theorems for the Green function of the lattice Laplacian under large deviations for a random walk. Izv. Math. 2012, 76, 1190–1217.
  19. Balashova, D.; Molchanov, S.; Yarovaya, E. Structure of the particle population for a branching random walk with a critical reproduction law. Methodol. Comput. Appl. Probab. 2021, 23, 85–102.
  20. Gikhman, I.I.; Skorokhod, A.V. Vvedenie v Teoriyu Sluchaĭnykh Protsessov [Introduction to the Theory of Random Processes], 2nd ed.; Nauka: Moscow, Russia, 1977; p. 567. (In Russian)
  21. Getan, A.; Molchanov, S.; Vainberg, B. Intermittency for branching walks with heavy tails. Stoch. Dyn. 2017, 17, 1750044.
Figure 1. Particle populations on $\mathbb{Z}^1$.
Figure 2. Particle populations on $\mathbb{Z}^2$.
Publisher’s Note: MDPI stays neutral with regard to jurisdictional claims in published maps and institutional affiliations.

