Article

Stochastic Gene Expression Revisited

by
Andrzej Tomski
1 and
Maciej Zakarczemny
2,*
1
Institute of Mathematics, University of Silesia in Katowice, 40-007 Katowice, Poland
2
Department of Applied Mathematics, Faculty of Computer Science and Telecommunications, Cracow University of Technology, 31-155 Cracow, Poland
*
Author to whom correspondence should be addressed.
Genes 2021, 12(5), 648; https://doi.org/10.3390/genes12050648
Submission received: 4 March 2021 / Revised: 20 April 2021 / Accepted: 22 April 2021 / Published: 26 April 2021
(This article belongs to the Section Technologies and Resources for Genetics)

Abstract
We investigate a model of gene expression in the form of an Iterated Function System (IFS), where the probability of choosing each iterated map depends on the state of the phase space. Random jump times of the process mark activation periods of the gene, when pre-mRNA molecules are produced before the mRNA and protein processing phases occur. The main idea is inspired by the continuous-time piecewise deterministic Markov process describing stochastic gene expression. We show that for our system there exists a unique invariant limit measure. We provide a full probabilistic description of the process and compare our results to those obtained for the model with continuous time.

1. Introduction

An interesting problem in the field of modeling biological processes [1] has been to understand the interactions in gene regulatory networks. Information on various approaches to describing relations between genes can be found in the paper [2]. Numerous methods based on chemical networks [3], logical networks [4] or dynamical systems [5] are used. As [6] suggests, piecewise deterministic stochastic processes may be used to model genetic patterns. Our paper belongs to this methodology, but it investigates a discrete-time analogue of the ordinary differential equation case. A more common approach would be to use Markov jump processes, which lead to chemical master equations (CME) considered on discrete state spaces [7]. There are several methods of solving CMEs, including finding an exact solution (e.g., by means of the Poisson representation) and approximation methods. Unfortunately, all these methods either only approximate the solution of the CME or can be applied only in particular cases. Moreover, most related studies focus on the translation phase, attaching little importance to the transcription phase or the intermediate mRNA processing. The main advantage of the analysis derived from piecewise deterministic stochastic processes is the possibility of extending a model simply by adding new types of particles to the stochastic reaction network. Our approach, based on piecewise deterministic stochastic processes, combines the deterministic approach represented by dynamical systems with stochastic effects represented by Markov processes. In many cases, discrete-time and continuous-time dynamical systems have become two alternative ways to describe the dynamics of a network. The formalism of discrete-time systems does not concentrate on instantaneous changes in the level of gene expression but rather on the overall change in a given time interval.
This may be the right approach for modeling processes where some reactions must be integrated over a short timeline in order to reveal the more important interactions affecting expression levels over a larger time perspective. Another aspect is that the experimental data obtained from living cells are undoubtedly discrete in time, and because of the costs we are limited to relatively small sets of samples [8]. In recent years, difference equation models have appeared (see [9,10,11]). In this work we concentrate on the gene expression process with four stages: activation of the gene, followed by pre-mRNA, mRNA and protein processing [12].
Basically, after a gene is activated at a random time moment, mature mRNA is produced in the nucleus and then transported to the cytoplasm, where protein translation follows. However, it is known that transcribed mRNA molecules must first undergo further processing before a new protein particle is formed. Besides that, many sources [13] claim that at least one additional phase, primary transcript (pre-mRNA) processing, also takes place. Indeed, in eukaryotic genes, after activation at a random time point, the DNA is transcribed into a pre-mRNA form of the transcript. Then, the non-coding sequences (introns) of the transcript are removed and the coding regions (exons) are combined. This process is called mRNA splicing. In addition to the splicing step, pre-mRNA processing also includes at least three other processes: addition of the m7G cap at the 5′ end to increase stability, polyadenylation at the 3′-UTR, which affects miRNA regulation and RNA degradation, and post-transcriptional modifications (methylation). In some genes, there is an extra step of RNA editing. Multiple other genetic modifications take place under the general term RNA processing. In this way, we finally obtain a functional form of mRNA, which is transferred into the cytoplasm, where, in the translation phase, mRNA is decoded into a protein. Of course, both mRNA and protein undergo biological degradation. The presence of a random component in our model, responsible for switching between active and inactive states of the gene at random time moments, has been identified in the continuous case as a piecewise deterministic Markov process (PDMP) [14]. This class of stochastic processes can be considered as randomly switching dynamical systems with the intensities of the consecutive jumps dependent on the current state of the phase space.
However, if we consider a discrete time scale, then we must investigate iterated function systems (IFSs) with place-dependent probabilities, see [15] or [16]. We are going to unify a common approach for both continuous-time and discrete-time dynamical systems with random jumps. We will investigate the existence of stationary distributions for discrete-time dynamical systems with random jumps and compare their form with the continuous-time case. Here we introduce jump intensity functions, which play a crucial role in the distribution of the waiting time for a jump [17], and for this purpose we provide an appropriate cumulative distribution function. Specifically, instead of $\exp\{-\int_0^t q(\pi(s,x_0))\,ds\}$ as in the continuous case (see [17]), we justify the formula $\exp\{-\sum_{s=0}^{t-1} q(\pi(s,x_0))\}$ for the life-span function in the discrete case. In this way we obtain an IFS corresponding to a discrete-time Markov process with jumps characterized by jump intensity functions.
A consequence of stochastic expression is the diversity of the population in terms of the composition of individual proteins and gene expression profiles [13]. Stochastic gene expression causes expression variability within the same organism or tissue, which has an effect on biological function.
This work is organized as follows. In Section 3 we present the model and give the definition of our process. In Section 4 and Section 5 we investigate its properties and describe it as an IFS with place-dependent probabilities. In Section 6, we use the classical result of Barnsley [18] to show that our process converges in distribution to a unique invariant measure as the number of iterations tends to infinity, and we describe the properties of this measure in Section 7. A complete step-by-step description of the whole process, summing up all the information from the earlier sections, is provided in Section 8. A computer simulation of trajectories of the process is the content of Section 9, with the source code available on GitHub [19]. Section 10 presents the derivation of formulas for the support of the invariant measure. The last section summarizes the paper.

2. Methods

In this paper, we investigate a model based on an IFS with place-dependent probabilities. Compared to the model presented in [17], we replace the ordinary differential equations (ODEs) with a system of difference equations, which leads to the investigation of a discrete-time model that can nevertheless be generalized to the continuous case. The question therefore arises of how to justify the use of difference equations in our model. Our justification is that our results remain consistent with the work [17], which can be considered an alternative way of describing such systems. The discrete approach is an attempt at a mathematical formulation of the problem using different tools. This paper thus provides a discussion between different approaches (sometimes different from the standard accepted principles).

3. Stochastic Gene Expression—Discrete Case

Gene expression is a very complex biological process comprising multiple essential subprocesses. In the continuous case, Lipniacki et al. [6] introduced a model based mathematically on a piecewise deterministic Markov process which includes three crucial phases: gene activation, mRNA and protein processing.
Let $\xi_1(t)$ denote the number of pre-mRNA molecules at time $t$, $\xi_2(t)$ the number of mRNA molecules at time $t$, and $\xi_3(t)$ the number of protein molecules at time $t$, where in general $t \in [0,\infty)$. Analogously to the solution of the continuous model, we can introduce the symbol $\pi^i_{\Delta t}(\xi(t))$, where $\xi(t) = (\xi_1(t), \xi_2(t), \xi_3(t))$. A discrete-time model evaluates $\pi^i_{\Delta t}(\xi(t))$ after a step $\Delta t$ starting from $\xi(t)$.
The difference equation could then be given by an equation of the following kind:
$$\Delta \xi(t) = \pi^i_{\Delta t}(\xi(t)) - \xi(t) = \xi(t + \Delta t) - \xi(t).$$
Thus, our approach is based on a particular time shift $\Delta t$. In this paper we fix the value of $\Delta t$ and, for the sake of simplicity, denote $\Delta t = \delta$. Let $f: \mathbb{R} \to \mathbb{R}^3$. Typically, in the theory of linear difference equations, one defines $\Delta f(t) = f(t+1) - f(t)$; in our model we take $\Delta f(t) = f(t+\delta) - f(t)$, so the time step is $\delta$ instead of unity. Please note that one could rescale the variables to obtain a unit step instead of $\delta$. We consider the following model, represented by the system of difference equations:
$$\begin{aligned}
\Delta \xi_1(t) &= \xi_1(t+\delta) - \xi_1(t) = R\,\gamma(t) - (C + \mu_{PR})\,\xi_1(t)\\
\Delta \xi_2(t) &= \xi_2(t+\delta) - \xi_2(t) = C\,\xi_1(t) - \mu_R\,\xi_2(t)\\
\Delta \xi_3(t) &= \xi_3(t+\delta) - \xi_3(t) = P\,\xi_2(t) - \mu_P\,\xi_3(t),
\end{aligned}$$
where $t \ge 0$, $t \in \delta\mathbb{Z}$; $R > 0$ is the speed of synthesis of pre-mRNA molecules if the gene is active; $C > 0$ is the rate of converting pre-mRNA into active mRNA molecules; $\mu_{PR} > 0$ is the pre-mRNA degradation rate; $\mu_R > 0$ is the mRNA degradation rate; $P > 0$ is the rate of converting mRNA into protein molecules and $\mu_P > 0$ is the protein degradation rate (see Figure 1). Provided the time step $\delta$ is small enough, $\mu_{PR}$, $\mu_R$, $\mu_P$, $R$, $C$, and $P$ will be independent of $\delta$, see [10]. Here $\gamma(t): [0,\infty) \to \{0,1\}$ is such that
$$\gamma(t) = \begin{cases} i & \text{if } t \in [0, t_1), \\ 1 - i & \text{if } t = t_1, \end{cases}$$
where $t_1$ denotes the moment of the first jump of this process; the distribution of $t_1$ is described by the life-span function in Section 3.1.
The values of the coefficients are scaled simultaneously to the interval $[0,1]$ so that their relative importance in the model can be seen more easily. Basically, there is no biological reason for imposing any restrictions on the values of the parameters; however, we need this step to perform a mathematical analysis of the asymptotic behavior of this system. We can transform almost any system in this way. An exception is the case when the system (1) reduces to fewer than three equations, which can happen when some coefficients are equal to zero or equal to one another. We do not analyze such cases here. An open question then remains what happens when, for example, $C + \mu_{PR} = \mu_R$ or $\mu_R + P = 1$, and in other similar situations described below. To avoid degenerate cases, we will assume in the model (1) that:
$$0 < C + \mu_{PR} < 1, \qquad 0 < \mu_R + P < 1, \qquad 0 < \mu_P < 1,$$
since the number of degraded molecules cannot exceed the current number of corresponding molecules.
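To make the dynamics of system (1) concrete, the following sketch iterates it for a fixed gene state $i$ with $\delta = 1$. The parameter values are our own illustrative choices (picked to satisfy the constraints (3)), not values used in the paper.

```python
def step(xi, i, R=1.0, C=0.25, mu_PR=0.25, mu_R=0.25, P=1/3, mu_P=1/3):
    """One step of system (1) for a fixed gene state i (delta = 1)."""
    x1, x2, x3 = xi
    return (x1 + R * i - (C + mu_PR) * x1,   # pre-mRNA
            x2 + C * x1 - mu_R * x2,         # mRNA
            x3 + P * x2 - mu_P * x3)         # protein

xi = (0.0, 0.0, 0.0)
for _ in range(200):
    xi = step(xi, i=1)
# xi approaches the fixed point
# (R/(C+mu_PR), RC/((C+mu_PR)mu_R), PCR/((C+mu_PR)mu_R mu_P)) = (2, 2, 2)
```

Starting from zero with the gene permanently active, the trajectory approaches the fixed point $w \cdot i$ discussed in Section 3.2.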
If we assume that $\gamma(t) \equiv i \in \{0,1\} = I$, we obtain the following system of linear difference equations:
$$\begin{aligned}
\Delta \xi_1(t) &= R\,i - (C + \mu_{PR})\,\xi_1(t)\\
\Delta \xi_2(t) &= C\,\xi_1(t) - \mu_R\,\xi_2(t)\\
\Delta \xi_3(t) &= P\,\xi_2(t) - \mu_P\,\xi_3(t),
\end{aligned}$$
with initial condition $\bar\xi = (\xi_1(0), \xi_2(0), \xi_3(0))$, where $t \in \delta\mathbb{Z}$, $t \ge 0$.
Please note that $\Delta \xi_k(t) = \xi_k(t+\delta) - \xi_k(t)$, $k \in \{1,2,3\}$.
Example 1.
Let us consider the system (4) with the parameter values $R = 1$, $\mu_{PR} = \tfrac14$, $C = \mu_R = \tfrac14$, $P = \mu_P = \tfrac13$. In this case, the solutions of (4) are:
$$\begin{aligned}
\xi_1(t) &= \left(\tfrac12\right)^{t/\delta}(\xi_1(0) - 2i) + 2i\\
\xi_2(t) &= \left(\tfrac34\right)^{t/\delta}(\xi_2(0) + \xi_1(0) - 4i) - \left(\tfrac12\right)^{t/\delta}(\xi_1(0) - 2i) + 2i\\
\xi_3(t) &= \left(\tfrac23\right)^{t/\delta}(\xi_3(0) - 4\xi_2(0) - 6\xi_1(0) + 18i) + \left(\tfrac34\right)^{t/\delta}(4\xi_2(0) + 4\xi_1(0) - 16i)\\
&\quad + \left(\tfrac12\right)^{t/\delta}(2\xi_1(0) - 4i) + 2i.
\end{aligned}$$
Please note that this formula is valid for any $t \in \mathbb{R}$; hence we could also extend the solutions (5) to the continuous-time case.
In Figure 2, we show these trajectories for $i = 0$ (left panel) and $i = 1$ (right panel).
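The closed-form solutions (5) can be cross-checked against direct iteration of the recursion. The sketch below takes $\delta = 1$; the loop encodes $\xi_1(t+1) = \tfrac12\xi_1(t) + i$, $\xi_2(t+1) = \tfrac34\xi_2(t) + \tfrac14\xi_1(t)$, $\xi_3(t+1) = \tfrac23\xi_3(t) + \tfrac13\xi_2(t)$, which is (4) with the parameters of Example 1.

```python
def closed_form(t, x0, i):
    """The solutions (5) of Example 1 with delta = 1."""
    x1, x2, x3 = x0
    return (0.5 ** t * (x1 - 2 * i) + 2 * i,
            0.75 ** t * (x2 + x1 - 4 * i) - 0.5 ** t * (x1 - 2 * i) + 2 * i,
            (2 / 3) ** t * (x3 - 4 * x2 - 6 * x1 + 18 * i)
            + 0.75 ** t * (4 * x2 + 4 * x1 - 16 * i)
            + 0.5 ** t * (2 * x1 - 4 * i) + 2 * i)

def iterate(t, x0, i):
    """Direct iteration of system (4) with the same parameters."""
    x1, x2, x3 = x0
    for _ in range(t):
        x1, x2, x3 = (0.5 * x1 + i,
                      0.75 * x2 + 0.25 * x1,
                      (2 / 3) * x3 + (1 / 3) * x2)
    return (x1, x2, x3)

for t in (0, 1, 2, 5, 10):
    for i in (0, 1):
        u = closed_form(t, (1.0, 2.0, 3.0), i)
        v = iterate(t, (1.0, 2.0, 3.0), i)
        assert max(abs(p - q) for p, q in zip(u, v)) < 1e-12
```

The assertions confirm that the two computations agree to machine precision for the chosen (arbitrary) initial condition.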
If we assume that $i \in I$ is constant, then the system (1) takes the form (4), which can be rewritten in the following form:
$$\begin{aligned}
\xi_1(t+\delta) &= R\,i - (C + \mu_{PR} - 1)\,\xi_1(t)\\
\xi_2(t+\delta) &= C\,\xi_1(t) - (\mu_R - 1)\,\xi_2(t)\\
\xi_3(t+\delta) &= P\,\xi_2(t) - (\mu_P - 1)\,\xi_3(t),
\end{aligned}$$
with the initial condition ξ ¯ = ( ξ 1 ( 0 ) , ξ 2 ( 0 ) , ξ 3 ( 0 ) ) .
Remark 1.
We assumed that $i$ is constant, but it is important to explain how our process behaves after the next switch.
To calculate $(\xi_1(t+\delta), \xi_2(t+\delta), \xi_3(t+\delta))$ we need to use the value $i(t)$, not $i(t+\delta)$, consistently with the formula (1).
If (3) holds and $C + \mu_{PR} \ne \mu_R$, $\mu_R \ne \mu_P$, $\mu_P \ne C + \mu_{PR}$, then the solutions of the system (4) are:
$$\begin{aligned}
\xi_1(t) ={}& (1 - C - \mu_{PR})^{t/\delta}\left(\xi_1(0) - \tfrac{R}{C+\mu_{PR}}\,i\right) + \tfrac{R}{C+\mu_{PR}}\,i,\\
\xi_2(t) ={}& (1 - \mu_R)^{t/\delta}\left(\xi_2(0) + \tfrac{C}{C+\mu_{PR}-\mu_R}\,\xi_1(0) - \tfrac{RC}{(C+\mu_{PR}-\mu_R)\,\mu_R}\,i\right)\\
&+ (1 - C - \mu_{PR})^{t/\delta}\left(-\tfrac{C}{C+\mu_{PR}-\mu_R}\,\xi_1(0) + \tfrac{RC}{(C+\mu_{PR}-\mu_R)(C+\mu_{PR})}\,i\right) + \tfrac{RC}{(C+\mu_{PR})\,\mu_R}\,i,\\
\xi_3(t) ={}& (1 - \mu_P)^{t/\delta}\left(\xi_3(0) + \tfrac{P}{\mu_R-\mu_P}\,\xi_2(0) + \tfrac{PC}{(\mu_R-\mu_P)(C+\mu_{PR}-\mu_P)}\,\xi_1(0) - \tfrac{PCR}{(\mu_R-\mu_P)(C+\mu_{PR}-\mu_P)\,\mu_P}\,i\right)\\
&+ (1 - \mu_R)^{t/\delta}\left(-\tfrac{P}{\mu_R-\mu_P}\,\xi_2(0) - \tfrac{PC}{(\mu_R-\mu_P)(C+\mu_{PR}-\mu_R)}\,\xi_1(0) + \tfrac{PCR}{(C+\mu_{PR}-\mu_R)(\mu_R-\mu_P)\,\mu_R}\,i\right)\\
&+ (1 - C - \mu_{PR})^{t/\delta}\left(\tfrac{PC}{(C+\mu_{PR}-\mu_R)(C+\mu_{PR}-\mu_P)}\,\xi_1(0) - \tfrac{PCR}{(C+\mu_{PR}-\mu_R)(C+\mu_{PR}-\mu_P)(C+\mu_{PR})}\,i\right)\\
&+ \tfrac{PCR}{(C+\mu_{PR})\,\mu_R\,\mu_P}\,i.
\end{aligned}$$
Since the Formula (7) is valid not only for $t \in \delta\mathbb{Z}$, $t \ge 0$, but for all $t \in [0,\infty)$, we can extend the solutions to continuous time.
Using the formula (7), we denote by
$$\pi^i(t, \bar\xi) = (\xi_1(t), \xi_2(t), \xi_3(t))$$
the solutions of the system (6), where we assume that $t \in [0,\infty)$. Also note that:
$$\pi^0(t, w - \bar\xi) = w - \pi^1(t, \bar\xi),$$
where $w = \left(\tfrac{R}{C+\mu_{PR}},\ \tfrac{RC}{(C+\mu_{PR})\,\mu_R},\ \tfrac{PCR}{(C+\mu_{PR})\,\mu_R\,\mu_P}\right)$. One can rewrite (9) in the following form:
$$\pi^1(t, \bar\xi) = w - \pi^0(t, w) + \pi^0(t, \bar\xi),$$
see also [17]. For the rest of the paper we assume $C + \mu_{PR} \ne \mu_R$, $\mu_R \ne \mu_P$ and $\mu_P \ne C + \mu_{PR}$ (to avoid degenerate cases).
After substituting $a = 1 - C - \mu_{PR}$, $b = 1 - \mu_R$, $c = 1 - \mu_P$, where $a, b, c \in (0,1)$, $a \ne b$, $b \ne c$ and $c \ne a$, into the system (6), and taking:
$$\begin{aligned}
\xi_1(t) &= R\,(a-c)(a-b)\,\xi_1^*(t)\\
\xi_2(t) &= CR\,(a-c)\,\xi_1^*(t) + CR\,(b-c)\,\xi_2^*(t)\\
\xi_3(t) &= PCR\,\bigl(\xi_1^*(t) + \xi_2^*(t) + \xi_3^*(t)\bigr),
\end{aligned}$$
we obtain the equivalent system of difference equations:
$$\begin{aligned}
\xi_1^*(t+\delta) &= a\,\xi_1^*(t) + \tfrac{1}{(a-c)(a-b)}\,i\\
\xi_2^*(t+\delta) &= b\,\xi_2^*(t) + \tfrac{1}{(b-a)(b-c)}\,i\\
\xi_3^*(t+\delta) &= c\,\xi_3^*(t) + \tfrac{1}{(c-a)(c-b)}\,i,
\end{aligned}$$
with initial condition $\bar\xi = (\xi_1(0), \xi_2(0), \xi_3(0)) \in \mathbb{R}^3$. We will return to system (12) in Section 4 and Section 10.
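The change of variables (11) can be cross-checked numerically: stepping the starred system (12) and mapping back to the original coordinates must agree with stepping the original system directly. A sketch, using the parameter values of Example 1 (so $a = \tfrac12$, $b = \tfrac34$, $c = \tfrac23$) and $i = 1$; the starting starred point is arbitrary.

```python
# parameters of Example 1: a = 1 - C - mu_PR, b = 1 - mu_R, c = 1 - mu_P
a, b, c = 0.5, 0.75, 2 / 3
R, C, P = 1.0, 0.25, 1 / 3
i = 1

def to_original(s):
    """The change of variables (11), mapping starred to original coordinates."""
    s1, s2, s3 = s
    return (R * (a - c) * (a - b) * s1,
            C * R * (a - c) * s1 + C * R * (b - c) * s2,
            P * C * R * (s1 + s2 + s3))

def star_step(s):
    """One step of the starred system (12)."""
    s1, s2, s3 = s
    return (a * s1 + i / ((a - c) * (a - b)),
            b * s2 + i / ((b - a) * (b - c)),
            c * s3 + i / ((c - a) * (c - b)))

def orig_step(x):
    """One step of the original system: xi1 -> a xi1 + R i, etc."""
    x1, x2, x3 = x
    return (a * x1 + R * i, b * x2 + C * x1, c * x3 + P * x2)

s = (1.0, -2.0, 0.5)
lhs = to_original(star_step(s))
rhs = orig_step(to_original(s))
# lhs and rhs agree: stepping commutes with the change of variables
```

The agreement of `lhs` and `rhs` verifies the equivalence of systems (6) and (12) at this point.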

3.1. Life-Span Function

Let $f$ be a function defined on the set of non-negative integers $\mathbb{Z}_{\ge 0}$ with values in $\mathbb{R}^d$, $d \in \mathbb{N}$. We define $\Delta f(k) = f(k+1) - f(k)$. Analogously to the description in [20], we investigate the following system of equations:
$$\Delta x(t) = g(x(t)), \quad \text{where } x: \mathbb{Z}_{\ge 0} \to \mathbb{R}^d \text{ and } g: \mathbb{R}^d \to \mathbb{R}^d.$$
Let $t_0 = 0$ and let $t_n$ be the time when the process changes its state for the $n$th time, $t_n \in \mathbb{Z}_{\ge 0}$, $t_{n+1} > t_n$. Let $x(t)$, $t \in [t_{n-1}, t_n) \cap \mathbb{Z}_{\ge 0} = \{t_{n-1}, t_{n-1}+1, \ldots, t_n - 1\}$, be a discrete trajectory of the process from time $t_{n-1}$ to $t_n$. Let $\pi(t, x_0)$ be a solution of the Equation (13) with the initial condition $x(0) = x_0$. Now we define $q(x)$ as the intensity function with parameter $x$, which means that after a small fixed natural time $\Delta t$ our process changes its state with probability $q(x)\,\Delta t$. Let $B$ be a Borel subset of $\mathbb{R}^3$ and
$$P(x, B) = \mathrm{Prob}\bigl(x(t_n) \in B \mid x(t_{n-1}) = x\bigr).$$
For any $n > 0$, the distribution function of the difference $t_n - t_{n-1}$ is given by $F(t) = 1 - \Phi_{x_0}(t)$, where $\Phi_{x_0}(t) = \mathrm{Prob}(t_n - t_{n-1} > t)$ is a survival function, i.e., the probability that the duration between consecutive changes of state of the process exceeds $t$.
Please note that $\Phi_{x_0}(0) = 1$ and $F(0) = 0$. If $n = 1$, then $\Phi_{x_0}(t)$ is the probability that the process changes its state for the first time after time $t$. Then we have
$$\mathrm{Prob}(t \le t_1 \le t + \Delta t \mid t_1 > t) = \frac{\Phi_{x_0}(t) - \Phi_{x_0}(t + \Delta t)}{\Phi_{x_0}(t)} = q(\pi(t, x_0))\,\Delta t.$$
Hence, by taking Δ t = 1 we obtain the following formulas:
$$\frac{\Phi_{x_0}(t+1)}{\Phi_{x_0}(t)} = 1 - q(\pi(t, x_0)), \quad \text{and} \quad \Phi_{x_0}(t) = \prod_{s=0}^{t-1}\bigl(1 - q(\pi(s, x_0))\bigr).$$
Therefore,
$$\Phi_{x_0}(t) = \exp\Bigl\{\sum_{s=0}^{t-1} \log\bigl(1 - q(\pi(s, x_0))\bigr)\Bigr\} \approx \exp\Bigl\{-\sum_{s=0}^{t-1} q(\pi(s, x_0))\Bigr\},$$
assuming each $q(\pi(s, x_0))$ lies in a sufficiently small neighborhood of zero, since $\lim_{h \to 0} \frac{\log(1+h)}{h} = 1$. A similar formula has been derived in the continuous case (see 1.7 in [20]), but in the Formula (17) we use a sum instead of an integral operator. The above considerations justify the following definition.
Definition 1.
We define life-span function by the following formula:
$$\Phi_{x_0}(t) = \exp\Bigl\{-\sum_{s=0}^{t-1} q(\pi(s, x_0))\Bigr\},$$
where $q: \mathbb{R}^d \to \mathbb{R}_{\ge 0}$ is a bounded switching intensity function and $t$ is a non-negative integer. In our case, if $t \in \mathbb{R}$, we can take
$$\Phi_{x_0}(t) = \exp\Bigl\{-\sum_{\substack{0 \le s \le t-1 \\ s \in \mathbb{Z}_{\ge 0}}} q(\pi(s, x_0))\Bigr\}.$$
Hence, instead of $\exp\{-\int_0^t q(\pi(s, x_0))\,ds\}$ as in the continuous case (the formula used in the paper [17]), we justify the formula $\exp\{-\sum_{s=0}^{t-1} q(\pi(s, x_0))\}$ for the life-span function in the discrete case.
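Numerically, the approximation behind Definition 1 is easy to inspect: the exact discrete survival probability is the product $\prod_{s=0}^{t-1}(1 - q(\pi(s, x_0)))$, while the life-span function replaces it by $\exp\{-\sum_s q\}$. A minimal sketch, with illustrative (hypothetical) intensity values along a trajectory:

```python
import math

def survival_exact(qs):
    """Exact discrete survival: product of (1 - q(pi(s, x0))) over s < t."""
    out = 1.0
    for q in qs:
        out *= 1.0 - q
    return out

def survival_approx(qs):
    """The life-span function of Definition 1: exp(-sum of q)."""
    return math.exp(-sum(qs))

# intensities q(pi(s, x0)) along an illustrative trajectory, s = 0,...,3
qs = [0.01, 0.02, 0.015, 0.005]
exact, approx = survival_exact(qs), survival_approx(qs)
# for small intensities the two values agree to about four decimal places
```

The gap between the two grows with the intensities, which is why Definition 1 assumes $q$ stays in a small neighborhood of zero.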

3.2. Piecewise Deterministic Markov Process

In this subsection we introduce the basic characteristics of the Markov process represented by the system (4) that will be needed for further considerations. Here, we assume that $t \in [0,\infty)$. Let $q_0(\xi_1,\xi_2,\xi_3)$ and $q_1(\xi_1,\xi_2,\xi_3)$ be positive and continuous functions on $\mathbb{R}^3$. Let $\xi = (\xi_1,\xi_2,\xi_3) \in \mathbb{R}^3$. Using our Definition 1 of the life-span function, we can define the distribution function of the difference $t_n - t_{n-1}$, namely
$$F_{\xi,i}(t) = 1 - \exp\Bigl\{-\sum_{s=0}^{t-1} q_i\bigl(\pi^i(s, \xi)\bigr)\Bigr\},$$
where, as before, $t_n$ is the time when the process changes its state for the $n$th time. Please note that $F_{\xi,i}(0) = 0$.
The explicit expressions for the solutions $\pi^i(t, \xi)$, $t \in [0,\infty)$, of the system (4) were found in (7). Hence,
$$\lim_{t \to \infty} \pi^i(t, \xi) = \left(\frac{R}{C+\mu_{PR}}\,i,\ \frac{RC}{(C+\mu_{PR})\,\mu_R}\,i,\ \frac{PCR}{(C+\mu_{PR})\,\mu_R\,\mu_P}\,i\right) = w\,i,
$$
for an arbitrary choice of $\xi$. It is known [20] that such a description gives us a piecewise deterministic Markov process
$$X_t = (\xi_1(t), \xi_2(t), \xi_3(t), i(t))$$
on the state space $\left[0, \tfrac{R}{C+\mu_{PR}}\right] \times \left[0, \tfrac{RC}{(C+\mu_{PR})\,\mu_R}\right] \times \left[0, \tfrac{PCR}{(C+\mu_{PR})\,\mu_R\,\mu_P}\right] \times \{0,1\}$ with two switching intensity functions $q_0$, $q_1$ and the transition measure given by the Dirac delta concentrated at the point $(x_1, x_2, x_3, 1-i)$. Please note that, by the definition of the system (1), this state space is invariant with respect to the process $X_t$, i.e., if $X_0$ belongs to it, then so does $X_t$ for all $t \in [0,\infty)$.
The technical proof of this fact, based on the formulas (7), is omitted. In Section 4, we introduce iterated function systems to investigate the existence of an invariant measure and its support.

4. Iterated Function System

For $i \in I$ we define the mappings $S_i: \mathbb{R}^3 \to \mathbb{R}^3$ given by the formulas
$$S_i(x, y, z) = \left(a x + \frac{i}{(a-c)(a-b)},\ b y + \frac{i}{(b-a)(b-c)},\ c z + \frac{i}{(c-a)(c-b)}\right).$$
We can then reformulate the system (12) in the form
$$(\xi_1^*(t+\delta), \xi_2^*(t+\delta), \xi_3^*(t+\delta)) = S_i(\xi_1^*(t), \xi_2^*(t), \xi_3^*(t)),$$
where $a, b, c \in (0,1)$, $a \ne b$, $b \ne c$, $c \ne a$.
The family { S 0 , S 1 : R 3 R 3 } is an iterated function system if for every i I the mapping S i is a contraction on the complete Euclidean metric space ( R 3 , | · | ) .
We can see that
$$|S_0(x,y,z) - S_0(x',y',z')| = |S_1(x,y,z) - S_1(x',y',z')| = \sqrt{a^2(x-x')^2 + b^2(y-y')^2 + c^2(z-z')^2} \le \max\{a,b,c\}\,\sqrt{(x-x')^2 + (y-y')^2 + (z-z')^2} = \max\{a,b,c\}\,|(x,y,z) - (x',y',z')|.$$
Hence each mapping $S_i: \mathbb{R}^3 \to \mathbb{R}^3$ is a contraction with constant equal to $\max\{a,b,c\} < 1$.
Definition 2.
Let $\{S_i: \mathbb{R}^3 \to \mathbb{R}^3 : i \in I\}$ be an iterated function system. We define the operator $S$ on sets $A \subset \mathbb{R}^3$ by the formula $S(A) := S_0(A) \cup S_1(A)$.
The transformation S introduced above corresponds to the function (30) in the model from [17]. We will describe an invariant compact set K such that K = S ( K ) .
Remark 2.
In the paper [21] it was shown that for the metric space $\mathbb{R}^n$ an iterated function system has a unique non-empty compact fixed set $K$ such that $K = S(K) = S_0(K) \cup S_1(K)$. One way of generating such a set $K$ is to start with a compact set $A_0 \subset \mathbb{R}^3$ (which can consist of a single point, called a seed) and iterate the mapping $S$ using the formula $A_{n+1} = S(A_n) = S_0(A_n) \cup S_1(A_n)$. This iteration converges to the attractor $K = \lim_{n \to \infty} A_n$, i.e., the distance between $K$ and $A_n$ converges to 0 in the Hausdorff metric, see [21].
Another way to generate some fractal objects was presented by Barnsley in [22]. The set of such points is called an IFS-attractor. In our case, an example of the attractor is shown in Figure 3; see also Section 10. The source code is available on GitHub [19].
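The chaos-game procedure in the spirit of Barnsley [22] can be sketched directly for the maps $S_0, S_1$ defined above. The parameter values below are illustrative (any $a, b, c \in (0,1)$, pairwise distinct, would do), and for simplicity each map is chosen with probability $\tfrac12$ rather than with the place-dependent probabilities of Section 5; this is not the paper's published code.

```python
import random

a, b, c = 0.5, 0.75, 2 / 3   # illustrative; a, b, c in (0,1), pairwise distinct

def S(point, i):
    """The map S_i of the iterated function system."""
    x, y, z = point
    return (a * x + i / ((a - c) * (a - b)),
            b * y + i / ((b - a) * (b - c)),
            c * z + i / ((c - a) * (c - b)))

random.seed(0)
p = (0.0, 0.0, 0.0)           # the seed point
orbit = []
for n in range(5000):
    p = S(p, random.randint(0, 1))
    if n >= 100:              # discard the transient
        orbit.append(p)
# `orbit` now approximates points of the attractor K and can be plotted
```

For these parameters every coordinate of the orbit stays in a bounded box (here $[0,48] \times [0,192] \times [-216,0]$), consistent with the existence of a compact attractor.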

5. Iterated Function Systems with Place-Dependent Probabilities

In this section, similarly to the paper [23], we provide a description of the IFS generated by the family of mappings $S_0, S_1$, with $p_i(x)$, $x \in \mathbb{R}^3$, $i \in \{0,1\}$, being the probability of choosing the mapping $S_i$. We assume that $t \in \mathbb{Z}_{\ge 0}$. Let $S_0, S_1: \mathbb{R}^3 \to \mathbb{R}^3$ be two Borel measurable non-singular functions, and let $p_0(x), p_1(x)$ be two non-negative Borel measurable functions such that $p_0(x) + p_1(x) = 1$ for all $x \in \mathbb{R}^3$.
If $x \in \mathbb{R}^3$ and $B \subset \mathbb{R}^3$ is a Borel subset, then the transition probability from $x$ to $B$ is defined by
$$P(x, B) = p_0(x)\,\mathbb{1}_B(S_0(x)) + p_1(x)\,\mathbb{1}_B(S_1(x)),$$
where $\mathbb{1}_B$ is the indicator function of the set $B$. We can define the mapping
$$(Tg)(x) = \int_{\mathbb{R}^3} g(y)\,P(x, dy) = p_0(x)\,g(S_0(x)) + p_1(x)\,g(S_1(x)),$$
where $T$ is a Markov operator on the space of bounded Borel measurable real-valued functions (which forms a Banach space with the supremum norm). Then $T\mathbb{1}_B(x) = P(x, B)$. Let $M(\mathbb{R}^3) = \{\nu \mid \nu: \mathcal{B}(\mathbb{R}^3) \to \mathbb{R},\ \nu(\emptyset) = 0,\ \nu \text{ is } \sigma\text{-additive}\}$ be the space of finite signed Borel measures on $\mathbb{R}^3$. By $P(\mathbb{R}^3) \subset M(\mathbb{R}^3)$ we denote the set of all probability measures in $M(\mathbb{R}^3)$.
We define the operator $F: M(\mathbb{R}^3) \ni \nu \mapsto F\nu \in M(\mathbb{R}^3)$ by the formula
$$F\nu(B) = \int_{\mathbb{R}^3} P(x, B)\,d\nu(x) = \int_{S_0^{-1}(B)} p_0(x)\,d\nu(x) + \int_{S_1^{-1}(B)} p_1(x)\,d\nu(x),$$
showing how a probability distribution $\nu$ of the process is transformed in one step; the two integrals can also be expressed through the classical Frobenius–Perron operators $P_{S_0}$, $P_{S_1}$ for the transformations $S_0$, $S_1$, respectively (see [20], Section 2.1.5). Let $C(\mathbb{R}^3)$ be the set of bounded real-valued continuous functions on $\mathbb{R}^3$. A Borel invariant probability measure $\mu$ (i.e., $F\mu = \mu$) is called attractive iff for all $\nu \in P(\mathbb{R}^3)$ and all $f \in C(\mathbb{R}^3)$ we have $\lim_{n \to \infty} \int f\,d(F^n\nu) = \int f\,d\mu$; in other words, $F^n\nu$ converges to $\mu$ in distribution. For the rest of the section, we will use the theory of Markov processes (see p. 369 in [18]) to describe this IFS. Let $(X_t^\mu)$ be the Markov process with initial distribution $\mu \in P(\mathbb{R}^3)$ and transition probability $P(x, B)$ from a point $x$ to a Borel subset $B \subset \mathbb{R}^3$. If $\mu$ is a Dirac measure concentrated at $x_0$, then we denote the process by $(X_t^{x_0})$. The transition probability $P$ has the following interpretation: $P(x, B) = P(X_1^{x_0} \in B \mid X_0^{x_0} = x)$. If $X_0^\mu$ has distribution $\mu \in P(\mathbb{R}^3)$, then $F\mu$ is the distribution of $X_1^\mu$, which means that $P(X_1^\mu \in B) = F\mu(B)$. It is known that
$$Tf(x_0) = \int_{\mathbb{R}^3} f(y)\,P(x_0, dy) = E\bigl(f(X_1^{x_0}) \mid X_0^{x_0} = x_0\bigr) = E f(X_1^{x_0}),$$
where $f$ is a bounded Borel measurable real-valued function.
Hence, $E f(X_1^{x_0}) = p_0(x_0)\,f(S_0(x_0)) + p_1(x_0)\,f(S_1(x_0))$. In the next section we investigate the long-term behavior of the process $(X_t^\mu)$.

6. Convergence of the System to Invariant Measure

In this section we assume that $t \in \mathbb{Z}_{\ge 0}$. In the classical work [18], Barnsley considered a discrete-time Markov process $(X_t^\mu)$ on a locally compact metric space $X$, obtained by randomly iterating Lipschitz maps $S_0, \ldots, S_n$, $n \in \mathbb{N}$. For each $i$, the probability of choosing the map $S_i$ at a given step is $p_i(x)$. Assume that:
  • Sets of finite diameter in $X$ have compact closure.
  • The mappings $S_i$ are average-contractive, i.e., $\sum_i p_i(x) \log \frac{d(S_i(x), S_i(y))}{d(x, y)} < 0$, uniformly in $x$ and $y$ (for details see the paper [18]).
  • $\exists\, \delta_0 > 0\ \forall x \in X:\ p_i(x) \ge \delta_0$.
  • For every $i$, the function $p_i(x)$ is Hölder continuous.
Under these assumptions, the Markov process $(X_t)$ converges in distribution to a unique invariant measure. In our setting, we can formulate a weaker version of the theorem above (see also [18], p. 372).
Theorem 1.
Let $(X_t^\mu)$ be a Markov process on the space $\mathbb{R}^3 \times \{0,1\}$. We assume that the initial distribution of this process is given by $\mu \in P(\mathbb{R}^3)$ and its transition probability is given by (26). Let the probability $p_i(x)$ of choosing the contractive map $S_i$ at each step be a Hölder continuous function and, moreover,
$$\exists\, \delta_0 > 0\ \forall x \in \mathbb{R}^3:\ p_i(x) \ge \delta_0.$$
Then the Markov process ( X t ) converges in distribution to a unique invariant measure when t .
To illustrate this theorem, we investigate the transition probability in the case of the stochastic process $(X_t)$, see (22). We assume that the state space of our process is $\mathbb{R}^3 \times \{0,1\}$. For $j \in \{0,1\}$ we define the jump transformation $R_j: \mathbb{R}^3 \times \{0,1\} \to \mathbb{R}^3 \times \{0,1\}$ by the formula $R_j(x, i) = (x, j)$.
Each jump transformation $R_0, R_1$ defined on the state space $\mathbb{R}^3 \times \{0,1\}$ is non-singular with respect to the product measure of the Lebesgue measure on $\mathbb{R}^3$ and the counting measure on the set $\{0,1\}$. We define the positive and continuous jump intensity functions $q_0 = q_0(\xi_1,\xi_2,\xi_3)$ and $q_1 = q_1(\xi_1,\xi_2,\xi_3)$ on $\mathbb{R}^3$. Here, $q_i$ is the jump intensity from the state $i$ to the state $1-i$, where $i \in \{0,1\}$, see Figure 1. Let $p_j(x, i) = p_j(x)$. The following equation holds:
$$P\bigl((x, i), \{(x, j)\}\bigr) = P\bigl(X_1 = (x, j) \mid X_0 = (x, i)\bigr) = p_j(x).$$
Please note that $P((x, 0), \{(x, j)\}) = P((x, 1), \{(x, j)\}) = p_j(x)$.
Let $S_0, S_1: \mathbb{R}^3 \to \mathbb{R}^3$ be two Borel measurable non-singular functions. If $x \in \mathbb{R}^3$ and $B \subset \mathbb{R}^3$ is a Borel subset, then the transition probability is defined by:
$$P\bigl((x, i), B \times \{0,1\}\bigr) = P\bigl(X_1 \in B \times \{0,1\} \mid X_0 = (x, i)\bigr) = p_0(x)\,\mathbb{1}_B(S_0(x)) + p_1(x)\,\mathbb{1}_B(S_1(x)) > \delta_0.$$
Please note that $X_1 \in \{(S_0(x), 0), (S_1(x), 1)\}$.
Assume now that the initial distribution of the process $(X_t)$ is given by $\mu \in P(\mathbb{R}^3)$ and its transition probability is given by (30). The process $(X_t)$ is both a Markov process and an IFS such that the probability of randomly choosing one of the two functions $S_0, S_1$ depends on the spatial part of the state. By Theorem 1 the Markov process $(X_t)$ converges in distribution to a unique invariant measure as $t \to \infty$.
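A small simulation illustrates the convergence asserted in Theorem 1. The sketch below follows only the first coordinate of the IFS (under the substitution of Section 3, the three coordinates evolve independently), with an illustrative Lipschitz (hence Hölder) place-dependent probability bounded below by $\delta_0 = 0.25$. Both the maps and the probability function here are our own choices, not the paper's.

```python
import random

def p1(x):
    # place-dependent probability of choosing S_1: Lipschitz, with
    # p1(x) in [0.25, 0.75], so delta_0 = 0.25 works for both maps
    return 0.25 + 0.5 / (1.0 + x * x)

def long_run_mean(x0, seed, n=200_000):
    """Run the chain S_0: x -> x/2, S_1: x -> x/2 + 1 and average the orbit."""
    rng = random.Random(seed)
    x, total = x0, 0.0
    for _ in range(n):
        x = 0.5 * x + (1.0 if rng.random() < p1(x) else 0.0)
        total += x
    return total / n

m1 = long_run_mean(0.0, seed=1)
m2 = long_run_mean(10.0, seed=2)
# m1 and m2 nearly coincide: the chain forgets its initial point, as
# expected for convergence in distribution to a unique invariant measure
```

Two long runs started from different points (and with different random draws) produce nearly identical long-run averages, which is the practical signature of a unique attractive invariant measure.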

7. Properties of an Invariant Measure

In this section we assume that $t \in \mathbb{Z}_{\ge 0}$ (for the sake of simplicity, we assume here that $\delta = 1$). By Theorem 1, we know that the process $(X_t)$ converges in distribution to a unique invariant measure. A classical result of Hutchinson [21] states that there exists a unique non-empty compact set $K$ such that $K = S(K) = S_0(K) \cup S_1(K)$.
Theorem 2.
Consider the stochastic process $(X_t^\mu)$ such that $X_0^\mu = x \in \mathbb{R}^3$ with $\mu \in P(\mathbb{R}^3)$ and transition probability given by (30). Then
$$\inf\{d(X_t^\mu, y) : y \in K\} \to 0,$$
where $d$ is the Euclidean distance in $\mathbb{R}^3$.
Proof. 
For any set $A \subset \mathbb{R}^3$ we denote $S^0(A) = A$, $S^1(A) = S(A)$ and $S^p(A) = S(S^{p-1}(A))$ for $p \ge 2$. Consider $A = \{x\}$. From Theorem 3.1 (Ch. 3, p. viii) of [21] it follows that $S^p(A)$ converges to $K$ in the Hausdorff metric uniformly as $p \to \infty$. In our notation, $\inf\{d(S_{i_0} S_{i_1} \cdots S_{i_t}(x), y) : y \in K\}$ converges to 0 uniformly as $t \to \infty$. Please note that $S_{i_0} S_{i_1} \cdots S_{i_t}(x)$ is a trajectory of our process, which depends on the probabilities $p_0, p_1$, see (30). Hence we get $\inf\{d(X_t^\mu, y) : y \in K\} \to 0$, since the choice of $x \in \mathbb{R}^3$ was arbitrary.□
Moreover, if $X_t^\mu \in K$, then $X_{t+1}^\mu \in \{S_0(X_t^\mu), S_1(X_t^\mu)\} \subset S_0(K) \cup S_1(K) = K$. Hence, $K$ is an invariant set for this process.

8. Jump Distribution

Remark 3.
Let $t \in [0,\infty)$. By analogy with the description of a PDMP in the book [14], we define the function $F_{\xi,i}$ as the cumulative distribution function of the first jump time $t_1$ of the process $(X_t)$, which starts at $t = 0$ at some point $(\xi, i) \in \mathbb{R}^3 \times \{0,1\}$. Let $F_{\xi,i}(t) := \mathrm{Prob}\{t_1 \le t\}$, and define the process on the random interval $[0, t_1]$ as follows:
$$X_t = \begin{cases} (\pi^i(t, \xi),\ i), & t < t_1; \\ (\pi^i(t, \xi),\ 1-i), & t = t_1. \end{cases}$$
After time $t_1$ the process $(X_t)$ starts again, but with the new initial condition $X(t_1)$.
The process then evolves along the points given by the solution (12) with the given value of $i$ until the time of the next jump $t_2$, and this step repeats infinitely many times. Please note that
$$\mathrm{Prob}(t_1 > t) = 1 - F_{\xi,i}(t) = \exp\Bigl\{-\sum_{s=0}^{t-1} q_i\bigl(\pi^i(s, \xi)\bigr)\Bigr\}.$$
Hence, $\mathrm{Prob}(t_1 > t) > 0$ for all $t \ge 0$, because $q_i$ is a bounded function.
Also, $\mathrm{Prob}(t_1 > t) < 1$ for all $t \ge 1$, because $q_i$ is a positive function.
Hence, $t_1 > 0$. Analogously, $\Delta t_{k-1} = t_k - t_{k-1} > 0$ for all $k \ge 1$, where $t_0 = 0$. All these statements hold with probability equal to 1.
Let $\mu = \max_{y \in [0,2]^3,\, i \in \{0,1\}} q_i(y)$. Next, by (20) we get $F_{\xi,i}(t) \leq 1 - \exp\{-\mu t\}$ for all $\xi \in [0,2]^3$. Please note that $\mathrm{Prob}(\Delta t_{k-1} \leq t) \leq 1 - \exp\{-\mu t\}$, independently of the values of $\Delta t_i$ for $0 \leq i \leq k-2$. Hence $\mathrm{Prob}(T_k \leq t) \leq (1 - \exp(-\mu t))^k$.
Therefore, $\lim_{k \to \infty} T_k = \infty$. We also get that
$E(\max\{k \geq 0 : T_k < t\}) = \sum_{k=0}^{\infty} \mathrm{Prob}(N_t \geq k) = \sum_{k=0}^{\infty} \mathrm{Prob}(T_k < t) \leq \sum_{k=0}^{\infty} (1 - \exp(-\mu t))^k = \exp(\mu t) < \infty,$
where $t > 0$. Please note that $E(\max\{k \geq 0 : T_k < t\})$ is the expected number of jumps of our process up to time $t$.
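The jump-time law above can be sampled directly by inverse transform: accumulate the intensities $q_i(\pi_i(s,\xi))$ along the flow until the running sum exceeds an independent $\mathrm{Exp}(1)$ draw, which reproduces $\mathrm{Prob}(t_1 > t) = \exp\{-\sum_{s=0}^{t-1} q_i(\pi_i(s,\xi))\}$. A sketch with a hypothetical bounded positive intensity $q$ and a placeholder one-step flow (the actual $q_i$ and $\pi_i$ from (12) and (20) are not reproduced here):

```python
import math
import random

def q(state):
    # Hypothetical intensity: bounded and positive, as the text requires
    return 0.3 + 0.1 * math.sin(sum(state))

def step(state):
    # Placeholder for the one-step flow pi_i(1, .)
    return tuple(0.5 * s + 0.25 for s in state)

def sample_first_jump(xi, rng):
    # Inverse-transform sampling: t1 = min{t >= 1 : sum_{s<t} q >= E},
    # E ~ Exp(1), which gives Prob(t1 > t) = exp(-sum_{s=0}^{t-1} q).
    target = rng.expovariate(1.0)
    cumulative, t, state = 0.0, 0, xi
    while True:
        cumulative += q(state)
        t += 1
        if cumulative >= target:
            return t
        state = step(state)

rng = random.Random(0)
jumps = [sample_first_jump((0.5, 0.5, 0.5), rng) for _ in range(1000)]
print(sum(jumps) / len(jumps))   # mean first-jump time
```

Because $q$ is bounded away from zero, the loop terminates with probability one, mirroring $t_1 > 0$ and $\lim_k T_k = \infty$ above.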
Now we gather all the facts about the process $(X_t)$ considered in this paper.
Definition of the process
1. The state space is $\mathbb{R}^3 \times \{0,1\}$.
2. According to the reaction scheme in Figure 1, the reactions occurring in our process are as follows:
Inactive $\xrightarrow{q_0(\xi_1, \xi_2, \xi_3)}$ Active, $\quad$ Active $\xrightarrow{q_1(\xi_1, \xi_2, \xi_3)}$ Inactive,
Outcome (A), Active (with probability $p_0(\xi_1, \xi_2, \xi_3)$):
$\emptyset \xrightarrow{R}$ pre-mRNA $\xrightarrow{C + \mu_{PR}}$ degradation or conversion of pre-mRNA to mRNA,
Outcome (B), Inactive (with probability $p_1(\xi_1, \xi_2, \xi_3)$):
$\emptyset \xrightarrow{0}$ pre-mRNA $\xrightarrow{C + \mu_{PR}}$ degradation or conversion of pre-mRNA to mRNA,
Outcomes (A) and (B):
pre-mRNA $\xrightarrow{C}$ mRNA $\xrightarrow{\mu_R}$ degradation, $\quad$ mRNA $\xrightarrow{P}$ protein $\xrightarrow{\mu_P}$ degradation,
where $(\xi_1(t), \xi_2(t), \xi_3(t))$ are the concentration levels of all the substances at time $t$.
Consider the simplified version of this system, (12), with $S_0, S_1 : \mathbb{R}^3 \to \mathbb{R}^3$ being two Borel measurable non-singular functions defined by (23).
3. Let $p_0(x), p_1(x)$ be two non-negative Borel measurable functions such that $p_0(x) + p_1(x) = 1$ for all $x \in \mathbb{R}^3$.
4. In addition, let $\mu \in \mathcal{P}(\mathbb{R}^3)$, and let $q_0$ and $q_1$ be two non-negative functions defined on $\mathbb{R}^3$.
5. From now on, by $\pi_i(t, \bar{\xi})$ we denote the solution of system (12), i.e., $\pi_i(t, \bar{\xi}) = (\xi_1^*(t), \xi_2^*(t), \xi_3^*(t))$. Although we consider a discrete-time Markov process, we can assume that $t \in [0, \infty)$ (see the comment above Equation (8)). We consider two cases, $i = 0$ or $i = 1$, corresponding to the functions $S_0$ and $S_1$, respectively.
6. Let $(X_n^{\mu})_{n=0}^{\infty}$ be a Markov process on the space $\mathbb{R}^3 \times \{0,1\}$ with initial distribution $\mu \in \mathcal{P}(\mathbb{R}^3)$ and transition probability given by (30).
7. Here, $X_1 \in \{(S_0(x), 0), (S_1(x), 1)\}$.
8. $(X_n^{\mu})_{n=0}^{\infty}$ is both a Markov process and an IFS in which the probability of randomly choosing one of the two functions $S_0, S_1$ depends on the spatial part of the state.
9. By analogy with the description in the book [14], we define the function $F_{\xi,i}$ as the cumulative distribution function of the first jump time $t_1$ of our process $(X_t)$, which starts at $t = 0$ at some point $(\xi, i) \in \mathbb{R}^3 \times \{0,1\}$.
10. We set $\mathrm{Prob}(t_1 \leq t) = F_{\xi,i}(t)$ and define the process on the random interval $[0, t_1]$ as follows:
$X_t = \begin{cases} (\pi_i(t, \xi),\, i), & t < t_1, \\ (\pi_i(t, \xi),\, 1 - i), & t = t_1. \end{cases}$
11. After time $t_1$ we start the process $X$ again, but with the new initial condition $X(t_1)$. The process evolves along the solution (12) with the given value of $i$ until the time of the next jump $t_2$, and this step repeats infinitely many times. Note that $\mathrm{Prob}(t_1 > t) = 1 - F_{\xi,i}(t) = \exp\{-\sum_{s=0}^{t-1} q_i(\pi_i(s, \xi))\}$.
12. From the definition of the process $(X_t)$, both intensity functions $q_0$ and $q_1$ depend on the two non-negative Borel measurable functions $p_0(x), p_1(x)$.
To summarize, the process $(X_t)$ is both a Markov process and an IFS in which the probability of randomly choosing one of the two functions $S_0, S_1$ depends on the spatial part of the state. By Theorem 1, the Markov process $(X_n^{\mu})$ converges in distribution to a unique invariant measure as $n \to \infty$. This means that after a sufficiently long time the trajectories of the process are arbitrarily close to $K$, independently of the initial probability distribution. In addition, if $X_n^{\mu} \in K$, then $X_{n+1}^{\mu} \in \{S_0(X_n^{\mu}), S_1(X_n^{\mu})\} \subset S_0(K) \cup S_1(K) = K$. Hence $K$ is invariant. It is worth noting that the life-span function of the process is equal to $\exp\{-\sum_{s=0}^{t-1} q(\pi(s, x_0))\}$, unlike the continuous case studied in [17].

9. Stochastic Simulations

To visualize the behavior of the stochastic process (4), we performed a stochastic simulation of the process (Figure 4). The code was developed in Python (3.7.4). The parameter values are $\delta = 1$, $R = 1$, $\mu_{PR} = \frac{1}{4}$, $C = \mu_R = \frac{1}{4}$, $P = \mu_P = \frac{1}{3}$, with Borel measurable probability functions $p_0(x) = \frac{1}{2(1 + |x|^2)}$ and $p_1(x) = 1 - p_0(x) = \frac{1 + 2|x|^2}{2(1 + |x|^2)}$, and initial conditions $x(0) = y(0) = z(0) = \frac{1}{2}$.
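The simulation loop can be sketched as follows. As a stand-in for the full process (4) (whose maps $S_0$, $S_1$ from (23) are not reproduced here; the authors' complete code is linked in [19]), we iterate the simplified system (12) with the Figure 3 parameters $a = 1/2$, $b = 3/4$, $c = 2/3$, using exactly the place-dependent probabilities $p_0$, $p_1$ given above.

```python
import random

# IFS simulation sketch: iterate the simplified maps of (12) with the
# gene state i drawn at each step with place-dependent probability p0(x).
a, b, c = 0.5, 0.75, 2.0 / 3.0
k = (1.0 / ((a - c) * (a - b)),
     1.0 / ((b - a) * (b - c)),
     1.0 / ((c - a) * (c - b)))

def apply_map(x, i):
    # One step of (12) with delta = 1 and gene state i in {0, 1}
    return (a * x[0] + k[0] * i, b * x[1] + k[1] * i, c * x[2] + k[2] * i)

def p0(x):
    # Place-dependent probability of choosing S0; p1(x) = 1 - p0(x)
    norm2 = x[0] ** 2 + x[1] ** 2 + x[2] ** 2
    return 1.0 / (2.0 * (1.0 + norm2))

def simulate(x, steps, rng):
    traj = [x]
    for _ in range(steps):
        i = 0 if rng.random() < p0(x) else 1
        x = apply_map(x, i)
        traj.append(x)
    return traj

rng = random.Random(42)
trajectory = simulate((0.5, 0.5, 0.5), 1000, rng)
```

Because each coordinate map is an affine contraction, every trajectory stays in a bounded region, consistent with the existence of the compact attractor $K$.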

10. The Derivation of the Formula for the Attractor

We consider the system which simplifies both systems (1) and (6), namely (12):
$\xi_1^*(t + \delta) = a\,\xi_1^*(t) + \frac{1}{(a-c)(a-b)}\, i,$
$\xi_2^*(t + \delta) = b\,\xi_2^*(t) + \frac{1}{(b-a)(b-c)}\, i,$
$\xi_3^*(t + \delta) = c\,\xi_3^*(t) + \frac{1}{(c-a)(c-b)}\, i,$
with the initial condition $\bar{\xi} = (\xi_1(0), \xi_2(0), \xi_3(0)) \in \mathbb{R}^3$. We also assume that the values of the parameters $a, b, c \in (0,1)$ are pairwise distinct.
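Under a fixed gene state $i$, each coordinate of the system above is an affine contraction $\xi \mapsto a\xi + ki$ with $a \in (0,1)$, so it converges geometrically to the fixed point $ki/(1-a)$. A quick numerical check, assuming $\delta = 1$ and the Figure 3 values $a = 1/2$, $b = 3/4$, $c = 2/3$:

```python
# Iterating (12) with a fixed gene state i: each coordinate is an affine
# contraction and converges to its fixed point k*i/(1 - coefficient).
a, b, c = 0.5, 0.75, 2.0 / 3.0
coeffs = (a, b, c)
k = (1.0 / ((a - c) * (a - b)),
     1.0 / ((b - a) * (b - c)),
     1.0 / ((c - a) * (c - b)))

def iterate_fixed_i(x, i, steps):
    for _ in range(steps):
        x = tuple(coeffs[j] * x[j] + k[j] * i for j in range(3))
    return x

# Fixed point for i = 1: k_j / (1 - coefficient_j) in each coordinate
fixed_point = tuple(k[j] / (1.0 - coeffs[j]) for j in range(3))
x = iterate_fixed_i((0.0, 0.0, 0.0), 1, 200)
print(max(abs(x[j] - fixed_point[j]) for j in range(3)))   # ~0
```

For these parameter values the fixed point for $i=1$ is $(48, 192, -216)$, which is the vector $v$ appearing in the attractor formulas below.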
In the case of $\delta = 1$, we will find the attractor of the process described by the system (36), i.e., the smallest invariant set which almost all trajectories of the process enter in finite time.
Remark 4.
Let us observe that if we consider only integer values of $t$, then the attractor generated by compositions of the systems (36) is a discrete set (see Figure 3 for $a = \frac{1}{2}$, $b = \frac{3}{4}$, $c = \frac{2}{3}$), contained inside the attractor obtained for real values of $t$. Hence, from now on we proceed only with real values of $t$.
Let $x = (x_1, x_2, x_3)$ and let $\pi_t^i(x) = (\xi_1^*(t), \xi_2^*(t), \xi_3^*(t))$ denote the solution of (36) at time $t$ with the initial condition $x$. Namely,
$\pi_t^i(x) = \left( a^t (x_1 - v_1 i) + v_1 i,\; b^t (x_2 - v_2 i) + v_2 i,\; c^t (x_3 - v_3 i) + v_3 i \right),$
where by $v$ we denote the vector
$(v_1, v_2, v_3) = \left( \frac{1}{(a-c)(a-b)(1-a)},\; \frac{1}{(b-a)(b-c)(1-b)},\; \frac{1}{(c-a)(c-b)(1-c)} \right).$
We obtain
$\pi_t^0(x) = (a^t x_1,\; b^t x_2,\; c^t x_3), \qquad \pi_t^1(x) = v + \pi_t^0(x) - \pi_t^0(v).$
This gives us the following formulas:
$\pi_{t_2}^1 \pi_{t_1}^0(x) = v + \pi_{t_1 + t_2}^0(x) - \pi_{t_2}^0(v), \qquad \pi_{t_2}^0 \pi_{t_1}^1(x) = \pi_{t_2}^0(v) + \pi_{t_1 + t_2}^0(x) - \pi_{t_1 + t_2}^0(v)$
for all times $t_1, t_2 \geq 0$. Hence
$\pi_{t_3}^0 \pi_{t_2}^1 \pi_{t_1}^0(x) = \pi_{t_3}^0(v) + \pi_{t_1 + t_2 + t_3}^0(x) - \pi_{t_2 + t_3}^0(v),$
$\pi_{t_3}^1 \pi_{t_2}^0 \pi_{t_1}^1(x) = v + \pi_{t_2 + t_3}^0(v) + \pi_{t_1 + t_2 + t_3}^0(x) - \pi_{t_1 + t_2 + t_3}^0(v) - \pi_{t_3}^0(v).$
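The composition identities above follow from the linearity of $\pi_t^0$ and the semigroup property $\pi_{t_2}^0 \circ \pi_{t_1}^0 = \pi_{t_1+t_2}^0$, and are easy to verify numerically. A sketch, assuming the toy parameters $a = 1/2$, $b = 3/4$, $c = 2/3$ (any pairwise distinct values in $(0,1)$ work):

```python
# Numerical check of the flow-composition identities (38).
a, b, c = 0.5, 0.75, 2.0 / 3.0
v = (1.0 / ((a - c) * (a - b) * (1.0 - a)),
     1.0 / ((b - a) * (b - c) * (1.0 - b)),
     1.0 / ((c - a) * (c - b) * (1.0 - c)))

def pi0(t, x):
    # Flow for i = 0: componentwise geometric contraction
    return (a ** t * x[0], b ** t * x[1], c ** t * x[2])

def pi1(t, x):
    # Flow for i = 1: pi1 = v + pi0(x) - pi0(v)
    p, q = pi0(t, x), pi0(t, v)
    return tuple(v[j] + p[j] - q[j] for j in range(3))

def close(u, w, eps=1e-9):
    return all(abs(u[j] - w[j]) < eps for j in range(3))

x, t1, t2 = (0.3, -0.2, 0.7), 1.7, 2.4
lhs = pi1(t2, pi0(t1, x))
rhs = tuple(v[j] + pi0(t1 + t2, x)[j] - pi0(t2, v)[j] for j in range(3))
print(close(lhs, rhs))  # True
```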
Using the formulas (38) we get
$\pi_{t_2}^0 \pi_{t_1}^1(x) = \left( a^{t_2} v_1 + a^{t_1 + t_2}(x_1 - v_1),\; b^{t_2} v_2 + b^{t_1 + t_2}(x_2 - v_2),\; c^{t_2} v_3 + c^{t_1 + t_2}(x_3 - v_3) \right),$
$\pi_{t_2}^1 \pi_{t_1}^0(x) = \left( v_1 - a^{t_2} v_1 + a^{t_1 + t_2} x_1,\; v_2 - b^{t_2} v_2 + b^{t_1 + t_2} x_2,\; v_3 - c^{t_2} v_3 + c^{t_1 + t_2} x_3 \right).$
For $t_1, t_2 \in [0, \infty)$ we can substitute $\alpha := a^{t_2}$, $\beta := a^{t_1 + t_2}$. Hence,
$\pi_{t_2}^0 \pi_{t_1}^1(x) = \left( \alpha v_1 + \beta (x_1 - v_1),\; \alpha^{\log_a b} v_2 + \beta^{\log_a b}(x_2 - v_2),\; \alpha^{\log_a c} v_3 + \beta^{\log_a c}(x_3 - v_3) \right),$
$\pi_{t_2}^1 \pi_{t_1}^0(x) = \left( v_1 - \alpha v_1 + \beta x_1,\; v_2 - \alpha^{\log_a b} v_2 + \beta^{\log_a b} x_2,\; v_3 - \alpha^{\log_a c} v_3 + \beta^{\log_a c} x_3 \right),$
where $1 \geq \alpha \geq \beta > 0$. These equations are similar to those obtained in the continuous case [17]; therefore, the attractor takes an analogous form.
Taking $x = (0, 0, 0)$ as the initial point in Formula (41) and $x = (v_1, v_2, v_3)$ in Formula (42), we get parametric equations for the surfaces $A_0$ and $A_1$, which will turn out to be the boundaries of the attractor $A$:
$A_0 = \left\{ \left( (\alpha - \beta) v_1,\; (\alpha^{\log_a b} - \beta^{\log_a b}) v_2,\; (\alpha^{\log_a c} - \beta^{\log_a c}) v_3 \right) \right\},$
$A_1 = \left\{ \left( v_1 (1 - \alpha + \beta),\; v_2 (1 - \alpha^{\log_a b} + \beta^{\log_a b}),\; v_3 (1 - \alpha^{\log_a c} + \beta^{\log_a c}) \right) \right\}.$
Please note that both sets are symmetric to each other with respect to the point $(\frac{v_1}{2}, \frac{v_2}{2}, \frac{v_3}{2})$, since $(v_1, v_2, v_3) - A_0 = A_1$. This means that the boundary of $A$ (and hence the attractor $A$ itself) is symmetric to itself with respect to the point $(\frac{v_1}{2}, \frac{v_2}{2}, \frac{v_3}{2})$. Moreover, it can be shown that
$(v_1, v_2, v_3) - A = A.$
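The symmetry $(v_1, v_2, v_3) - A_0 = A_1$ can be checked pointwise: reflecting the $A_0$ point with parameters $(\alpha, \beta)$ through $v$ gives exactly the $A_1$ point with the same parameters. A sketch with the toy values $a = 1/2$, $b = 3/4$, $c = 2/3$:

```python
import math

# Pointwise check of the symmetry v - A0 = A1 for the boundary surfaces.
a, b, c = 0.5, 0.75, 2.0 / 3.0
lab, lac = math.log(b) / math.log(a), math.log(c) / math.log(a)
v = (1.0 / ((a - c) * (a - b) * (1.0 - a)),
     1.0 / ((b - a) * (b - c) * (1.0 - b)),
     1.0 / ((c - a) * (c - b) * (1.0 - c)))

def A0_point(alpha, beta):
    return (v[0] * (alpha - beta),
            v[1] * (alpha ** lab - beta ** lab),
            v[2] * (alpha ** lac - beta ** lac))

def A1_point(alpha, beta):
    return (v[0] * (1 - alpha + beta),
            v[1] * (1 - alpha ** lab + beta ** lab),
            v[2] * (1 - alpha ** lac + beta ** lac))

alpha, beta = 0.8, 0.3
reflected = tuple(v[j] - A0_point(alpha, beta)[j] for j in range(3))
print(all(abs(reflected[j] - A1_point(alpha, beta)[j]) < 1e-9 for j in range(3)))  # True
```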
Now we are going to describe the attractor $A$. It turns out that two changes of $i$ are sufficient to reach any point in $A$. The compositions of three flows, $\pi_{t_3}^1 \pi_{t_2}^0 \pi_{t_1}^1$ and $\pi_{t_3}^0 \pi_{t_2}^1 \pi_{t_1}^0$, are given by the following formulas:
$\pi_{t_3}^1 \pi_{t_2}^0 \pi_{t_1}^1(x) = \left( v_1 (1 - a^{t_3} + a^{t_2 + t_3} - a^{t_1 + t_2 + t_3}) + a^{t_1 + t_2 + t_3} x_1,\; v_2 (1 - b^{t_3} + b^{t_2 + t_3} - b^{t_1 + t_2 + t_3}) + b^{t_1 + t_2 + t_3} x_2,\; v_3 (1 - c^{t_3} + c^{t_2 + t_3} - c^{t_1 + t_2 + t_3}) + c^{t_1 + t_2 + t_3} x_3 \right),$
$\pi_{t_3}^0 \pi_{t_2}^1 \pi_{t_1}^0(x) = \left( v_1 (a^{t_3} - a^{t_2 + t_3}) + a^{t_1 + t_2 + t_3} x_1,\; v_2 (b^{t_3} - b^{t_2 + t_3}) + b^{t_1 + t_2 + t_3} x_2,\; v_3 (c^{t_3} - c^{t_2 + t_3}) + c^{t_1 + t_2 + t_3} x_3 \right).$
Figure 5 presents trajectories of the processes (44) and (45), where $t_1, t_2, t_3$ are drawn from the uniform distribution on the interval $(0, 100)$. Both show the contour of the attractor $A$. For the parameters chosen to create Figure 5, the color intensity (i.e., red intensity, blue intensity) together with Equations (39) and (40) may suggest bistability (in the sense of bimodality of the stationary distribution, see [17]). We are convinced that there is a need for further research on bistability in the discrete case. Please note that for deterministic linear systems bistability cannot hold; hence such a phenomenon in a stochastic linear system would be interesting.
We start with the description of the set which can be reached in two changes of $i$. Analogously to the above, in the case of a double superposition we define $\alpha, \beta, \gamma$ anew. For $t_1, t_2, t_3 \in [0, \infty)$ we substitute $\alpha := a^{t_3}$, $\beta := a^{t_2 + t_3}$, $\gamma := a^{t_1 + t_2 + t_3}$, and hence we get the equations:
$\pi_{t_3}^1 \pi_{t_2}^0 \pi_{t_1}^1(x) = \left( v_1 (1 - \alpha + \beta - \gamma) + \gamma x_1,\; v_2 (1 - \alpha^{\log_a b} + \beta^{\log_a b} - \gamma^{\log_a b}) + \gamma^{\log_a b} x_2,\; v_3 (1 - \alpha^{\log_a c} + \beta^{\log_a c} - \gamma^{\log_a c}) + \gamma^{\log_a c} x_3 \right),$
$\pi_{t_3}^0 \pi_{t_2}^1 \pi_{t_1}^0(x) = \left( v_1 (\alpha - \beta) + \gamma x_1,\; v_2 (\alpha^{\log_a b} - \beta^{\log_a b}) + \gamma^{\log_a b} x_2,\; v_3 (\alpha^{\log_a c} - \beta^{\log_a c}) + \gamma^{\log_a c} x_3 \right),$
where $1 \geq \alpha \geq \beta \geq \gamma > 0$ and
$(v_1, v_2, v_3) = \left( \frac{1}{(a-c)(a-b)(1-a)},\; \frac{1}{(b-a)(b-c)(1-b)},\; \frac{1}{(c-a)(c-b)(1-c)} \right).$
We can in fact assume that $1 \geq \alpha \geq \beta \geq \gamma \geq 0$, because if $\gamma = 0$ in Equations (46) and (47), then we get:
$\pi_{t_3}^1 \pi_{t_2}^0 \pi_{t_1}^1(x) = \left( v_1 (1 - \alpha + \beta),\; v_2 (1 - \alpha^{\log_a b} + \beta^{\log_a b}),\; v_3 (1 - \alpha^{\log_a c} + \beta^{\log_a c}) \right),$
$\pi_{t_3}^0 \pi_{t_2}^1 \pi_{t_1}^0(x) = \left( v_1 (\alpha - \beta),\; v_2 (\alpha^{\log_a b} - \beta^{\log_a b}),\; v_3 (\alpha^{\log_a c} - \beta^{\log_a c}) \right)$
(the above states can also be obtained by taking $\gamma > 0$ after an appropriate substitution into $\alpha$, $\beta$, $(x_1, x_2, x_3)$). Please note that these states belong to the boundaries $A_1$ and $A_0$, respectively. Hence $\gamma = 0$ is the case when the trajectory is on the boundary of the attractor $A$.
Let
$A = \left\{ \left( \varphi_{a,b,c}(x,y,z),\; \chi_{a,b,c}(x,y,z),\; \psi_{a,b,c}(x,y,z) \right) : 1 \geq x \geq y \geq z \geq 0 \right\},$
where
$\varphi_{a,b,c}(x,y,z) := \frac{x - y + z}{(a-c)(a-b)(1-a)}, \quad \chi_{a,b,c}(x,y,z) := \frac{x^{\log_a b} - y^{\log_a b} + z^{\log_a b}}{(b-a)(b-c)(1-b)}, \quad \psi_{a,b,c}(x,y,z) := \frac{x^{\log_a c} - y^{\log_a c} + z^{\log_a c}}{(c-a)(c-b)(1-c)}.$
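As a sanity check, a state of the form (47) started from $(x_1, x_2, x_3) = (v_1, v_2, v_3)$ coincides with $(\varphi_{a,b,c}, \chi_{a,b,c}, \psi_{a,b,c})(\alpha, \beta, \gamma)$, so it lies in $A$. A numerical sketch, assuming the toy parameters $a = 1/2$, $b = 3/4$, $c = 2/3$:

```python
import math

# Check that the triple-composition state (47) started from v matches the
# (phi, chi, psi) parametrization of the attractor A.
a, b, c = 0.5, 0.75, 2.0 / 3.0
lab, lac = math.log(b) / math.log(a), math.log(c) / math.log(a)
v = (1.0 / ((a - c) * (a - b) * (1.0 - a)),
     1.0 / ((b - a) * (b - c) * (1.0 - b)),
     1.0 / ((c - a) * (c - b) * (1.0 - c)))

def phi_chi_psi(x, y, z):
    return (v[0] * (x - y + z),
            v[1] * (x ** lab - y ** lab + z ** lab),
            v[2] * (x ** lac - y ** lac + z ** lac))

def state_47(alpha, beta, gamma, x):
    # pi_{t3}^0 pi_{t2}^1 pi_{t1}^0 (x) rewritten via alpha, beta, gamma
    return (v[0] * (alpha - beta) + gamma * x[0],
            v[1] * (alpha ** lab - beta ** lab) + gamma ** lab * x[1],
            v[2] * (alpha ** lac - beta ** lac) + gamma ** lac * x[2])

alpha, beta, gamma = 0.9, 0.5, 0.2   # 1 >= alpha >= beta >= gamma > 0
s = state_47(alpha, beta, gamma, v)
t = phi_chi_psi(alpha, beta, gamma)
print(all(abs(s[j] - t[j]) < 1e-9 for j in range(3)))  # True
```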
The set $A$ consists of all points from (47), where we take $(x_1, x_2, x_3) = (v_1, v_2, v_3)$.
Equivalently, using Equation (46) we get an alternative formula for the set $A$:
$A = \left\{ \left( \tilde{\varphi}_{a,b,c}(x,y,z),\; \tilde{\chi}_{a,b,c}(x,y,z),\; \tilde{\psi}_{a,b,c}(x,y,z) \right) : 1 \geq x \geq y \geq z \geq 0 \right\},$
where
$\tilde{\varphi}_{a,b,c}(x,y,z) := \frac{1 - x + y - z}{(a-c)(a-b)(1-a)}, \quad \tilde{\chi}_{a,b,c}(x,y,z) := \frac{1 - x^{\log_a b} + y^{\log_a b} - z^{\log_a b}}{(b-a)(b-c)(1-b)}, \quad \tilde{\psi}_{a,b,c}(x,y,z) := \frac{1 - x^{\log_a c} + y^{\log_a c} - z^{\log_a c}}{(c-a)(c-b)(1-c)}.$
In light of Equation (43), the descriptions (48) and (51) are equivalent. Analogously to the description (48), we provide a plot of the attractor in the case of the description (50). For geometric reasons, the two Formulas (48) and (50) describe the same set $A$; see Figure 6 and Figure 7, and compare also with Figure 5.
Now, let $V = \{(x,y,z) : 1 > x > y > z > 0\}$ and let $f : V \to \mathbb{R}^3$ be given as follows:
$f(x,y,z) = \left( \varphi_{a,b,c}(x,y,z),\; \chi_{a,b,c}(x,y,z),\; \psi_{a,b,c}(x,y,z) \right),$
where we use the notation taken from (49). As in the considerations of Appendix A in the paper [17], we prove that the function $f$ is a local diffeomorphism. Hence $f(V) = \{ \pi_{t_3}^0 \pi_{t_2}^1 \pi_{t_1}^0(v) : t_1, t_2, t_3 > 0 \}$ (see Equations (46) and (47)) is an open set. Moreover, $f(V)$ is the interior of $A$. Please note that
$A_0 = \left\{ \left( \varphi_{a,b,c}(x,y,z),\; \chi_{a,b,c}(x,y,z),\; \psi_{a,b,c}(x,y,z) \right) : 1 > x > y > z = 0 \right\},$
$A_1 = \left\{ \left( \varphi_{a,b,c}(x,y,z),\; \chi_{a,b,c}(x,y,z),\; \psi_{a,b,c}(x,y,z) \right) : 1 = x > y > z > 0 \right\},$
$A_0 \cap A_1 = \left\{ \left( \varphi_{a,b,c}(x,y,z),\; \chi_{a,b,c}(x,y,z),\; \psi_{a,b,c}(x,y,z) \right) : 1 > x > y = z > 0 \right\} = \left\{ \left( \varphi_{a,b,c}(x,y,z),\; \chi_{a,b,c}(x,y,z),\; \psi_{a,b,c}(x,y,z) \right) : 1 = x = y > z > 0 \right\}.$
Hence the set $A$ is bounded by the surfaces $A_0$, $A_1$, which are built from the trajectories of the system (36) in which $i$ was switched only once. The set $A$ is indeed the support of the stationary distribution as time goes to infinity. For this purpose, it is sufficient to show that:
(1) after more than two switches the trajectories of the process do not leave $A$;
(2) there is no invariant subset $B$ of $A$ other than $A$ itself.
To satisfy the second condition it is sufficient to show that all the states in $A$ communicate with each other, i.e., we can join any two arbitrary states by some trajectory of the process. The proof follows the same lines as in [17] (pp. 31–33).

11. Conclusions

We developed a model of gene expression using an IFS with place-dependent probabilities. As a novelty, in this paper we introduced new formulas for the life-span functions, suitable for the discrete case. Moreover, we have shown that the asymptotic behavior of the model is in line with the results presented in the paper [17]. We performed extensive numerical simulations and described the support of the invariant measure of this process. Both the continuous-time and the discrete-time systems are asymptotically stable. We believe further research could establish a relationship between the supports of the respective invariant measures. Fitting suitable parameter values would allow this model to be used with experimental data obtained under laboratory conditions, realistically for selected values in some time interval.

Author Contributions

Conceptualization, M.Z. and A.T.; methodology, M.Z. and A.T.; software, A.T.; validation, M.Z. and A.T.; formal analysis, M.Z. and A.T.; investigation, M.Z. and A.T.; writing—original draft preparation, M.Z. and A.T.; writing—review and editing, M.Z. and A.T.; visualization, M.Z. and A.T.; supervision, M.Z. and A.T. All authors have read and agreed to the published version of the manuscript.

Funding

This paper was partially supported by the National Science Centre (Poland) Grant No. 2017/27/B/ST1/00100.

Institutional Review Board Statement

Not applicable.

Informed Consent Statement

Not applicable.

Data Availability Statement

Not applicable.

Acknowledgments

The authors are grateful to Ryszard Rudnicki (IMPAN Katowice, Poland) for his comments.

Conflicts of Interest

The authors declare no conflict of interest.

References

  1. DiStefano, J., III. Dynamic Systems Biology Modeling and Simulation, 1st ed.; Academic Press: Cambridge, MA, USA, 2013. [Google Scholar]
  2. Smolen, P.; Baxter, D.A.; Byrne, J.H. Mathematical modeling of gene networks. Neuron 2000, 26, 567–580. [Google Scholar] [CrossRef] [Green Version]
  3. Schnoerr, D.; Sanguinetti, G.; Grima, R. Approximation and inference methods for stochastic biochemical kinetics—A tutorial review. J. Phys. A Math. Theor. 2017, 50, 93–101. [Google Scholar] [CrossRef]
  4. Somogyi, R.; Sniegoski, C. Modeling the complexity of genetic networks. Understanding multigenic and pleiotropic regulation. Complexity 1996, 1, 46–53. [Google Scholar] [CrossRef]
  5. May, R.M. Biological Populations Obeying Difference equations: Stable Points, stable cycles, and chaos. J. Theor. Biol. 1975, 51, 511–524. [Google Scholar] [CrossRef]
  6. Lipniacki, T.; Paszek, P.; Marciniak-Czochra, A.; Brasier, A.R.; Kimmel, M. Transcriptional stochasticity in gene expression. J. Theor. Biol. 2006, 238, 348–367. [Google Scholar] [CrossRef] [PubMed]
  7. Grima, R.; Schmidt, T.; Newman, T. Steady-state fluctuations of a genetic feedback loop: An exact solution. J. Chem. Phys. 2012, 137, 035104. [Google Scholar] [CrossRef] [PubMed] [Green Version]
  8. Cacace, F.; Farina, L.; Germani, A.; Palumbo, P. Discrete-time models for gene transcriptional regulation networks. In Proceedings of the 49th IEEE Conference on Decision and Control, CDC, Atlanta, GA, USA, 15–17 December 2010. [Google Scholar]
  9. Levine, H.A.; Yeon-Jung, S.; Nilsen-Hamilton, M. A discrete dynamical system arising in molecular biology. Discret. Contin. Dyn. Syst. B 2012, 17, 2091–2151. [Google Scholar] [CrossRef]
  10. D’Haeseleer, P.; Wen, X.; Fuhrman, S.; Somogyi, R. Linear modeling of mRNA expression levels during CNS development and injury. In Proceedings of the Pacific Symposium on Biocomputing 1999, Mauna Lani, HI, USA, 4–9 January 1999; pp. 41–52. [Google Scholar]
  11. Song, M.J.; Ouyang, Z.; Liu, Z.L. Discrete Dynamical System Modeling for Gene Regulatory Networks of HMF Tolerance for Ethanologenic Yeast. IET Syst. Biol. 2009, 3, 203–218. [Google Scholar] [CrossRef] [PubMed] [Green Version]
  12. Cobb, M. 60 years ago, Francis Crick changed the logic of biology. PLoS Biol. 2017, 15, e2003243. [Google Scholar] [CrossRef] [PubMed] [Green Version]
  13. Lodish, H.; Berk, A.; Kaiser, C.; Krieger, M.; Bretscher, A.; Ploegh, A.; Amon, A.; Martin, K. Molecular Cell Biology, 8th ed.; W.H. Freeman: New York, NY, USA, 2016. [Google Scholar]
  14. Davis, M.H.A. Piecewise-deterministic Markov processes: A general class of nondiffusion stochastic models. J. R. Stat. Soc. Ser. B 1984, 46, 353–388. [Google Scholar] [CrossRef]
  15. Bárány, B. On Iterated Function Systems with place-dependent probabilities. Proc. Am. Math. Soc. 2015, 143, 419–432. [Google Scholar] [CrossRef]
  16. Ladjimi, F.; Peigné, M. Iterated function systems with place dependent probabilities and application to the Diaconis-Friedman’s chain on [0,1]. arXiv 2017, arXiv:1707.07237. [Google Scholar]
  17. Rudnicki, R.; Tomski, A. On a stochastic gene expression with pre-mRNA, mRNA and protein contribution. J. Theor. Biol. 2015, 387, 54–67. [Google Scholar] [CrossRef] [PubMed] [Green Version]
  18. Barnsley, M.F.; Demko, S.G.; Elton, J.H.; Geronimo, J.S. Invariant measures for Markov processes arising from iterated function systems with place-dependent probabilities. Ann. l’Inst. Henri Poincare Probab. Stat. 1988, 24, 367–394. [Google Scholar]
  19. Tomski, A. Stochastic Gene Expression Revisited simulations. Available online: https://github.com/AndrzejTomski/Stochastic_Gene_Expression_Revisited (accessed on 18 April 2021).
  20. Rudnicki, R.; Tyran-Kamińska, M. Piecewise Deterministic Processes in Biological Models, 1st ed.; Springer International Publishing: Cham, Switzerland, 2017. [Google Scholar]
  21. Hutchinson, J.E. Fractals and self similarity. Indiana Univ. Math. J. 1981, 30, 713–747. [Google Scholar] [CrossRef]
  22. Barnsley, M.F.; Rising, H. Fractals Everywhere; Academic Press Professional: Boston, MA, USA, 1993. [Google Scholar]
  23. Kwiecińska, A.; Słomczyński, W. Random dynamical systems arising from iterated function systems with place-dependent probabilities. Stat. Probab. Lett. 2000, 50, 401–407. [Google Scholar] [CrossRef]
Figure 1. The diagram of auto-regulated gene expression with pre-mRNA, mRNA and protein contribution. Description of the parameters: $q_0$ and $q_1$ are switching intensity functions; $R$ is the speed of synthesis of pre-mRNA molecules if the gene is active; $C$ is the rate of converting pre-mRNA into active mRNA molecules; $P$ is the rate of converting mRNA into protein molecules; $\mu_{PR}$ is the pre-mRNA degradation rate; $\mu_R$ is the mRNA degradation rate; $\mu_P$ is the protein degradation rate. The sum $C + \mu_{PR}$ should be treated as the total degradation rate of the pre-mRNA particles.
Figure 2. A solution of Equation (4) for $\delta = 1$, $R = 1$, $\mu_{PR} = \frac{1}{4}$, $C = \mu_R = \frac{1}{4}$, $P = \mu_P = \frac{1}{3}$, with $i = 0$ on the left and $i = 1$ on the right.
Figure 3. Results obtained for the system (12) with $a = \frac{1}{2}$, $b = \frac{3}{4}$, $c = \frac{2}{3}$, $\delta = 1$, the initial conditions $(\xi_1(0), \xi_2(0), \xi_3(0)) = (0, 0, 0)$ and constant probabilities $p_0(x) \equiv 0.5$, $p_1(x) \equiv 0.5$, after 100,000 iterations.
Figure 4. Visualization of the stochastic process (4), depending on the two non-negative Borel measurable functions $p_0(x) = \frac{1}{2(1 + |x|^2)}$ and $p_1(x) = 1 - p_0(x) = \frac{1 + 2|x|^2}{2(1 + |x|^2)}$.
Figure 5. All states of the stochastic process (36) with $a = \frac{1}{2}$, $b = \frac{3}{4}$, $c = \frac{2}{3}$ after two switches. The red color represents the states given by Formula (45), i.e., trajectories of the process starting with $i = 0$, while the blue color represents the states given by Formula (44), i.e., trajectories starting with $i = 1$.
Figure 6. The set described by Formula (48) with the parameter values $a = \frac{1}{2}$, $b = \frac{3}{4}$, $c = \frac{2}{3}$. Compare with Figure 3 and Figure 5.
Figure 7. The set described by Formula (50) with the parameter values $a = \frac{1}{2}$, $b = \frac{3}{4}$, $c = \frac{2}{3}$.
