Article

Poincaré-Type Inequalities for Compact Degenerate Pure Jump Markov Processes

by Pierre Hodara 1,* and Ioannis Papageorgiou 2
1 Institut National de la Recherche Agronomique (INRA), MaIAGE, Allée de Vilvert, 78352 Jouy-en-Josas, France
2 Neuromat, Instituto de Matemática e Estatística, Universidade de São Paulo, São Paulo SP-CEP 05508-090, Brazil
* Author to whom correspondence should be addressed.
Mathematics 2019, 7(6), 518; https://doi.org/10.3390/math7060518
Submission received: 9 May 2019 / Revised: 31 May 2019 / Accepted: 1 June 2019 / Published: 6 June 2019
(This article belongs to the Special Issue Stochastic Processes in Neuronal Modeling)

Abstract:
We aim to prove Poincaré inequalities for a class of pure jump Markov processes inspired by the model introduced by Galves and Löcherbach to describe the behavior of interacting brain neurons. In particular, we consider neurons with degenerate jumps, i.e., which lose their memory when they spike, while the probability of a spike depends on the current position, and thus the past, of the whole neural system. The process studied by Galves and Löcherbach is a point process counting the spike events of the system and is therefore non-Markovian. In this work, we consider a process describing the membrane potential of each neuron, which contains the relevant information of the past. This allows us to work in a Markovian framework.

1. Introduction

The aim of this paper is to prove Poincaré inequalities for the semigroup $P_t$, as well as for the invariant measure, of the model introduced in [1] by Galves and Löcherbach to describe the activity of a biological neural network. What is particularly interesting about the jump process in question is that it has degenerate jumps, in the sense that when a particle (neuron) spikes, it loses its memory by jumping to zero. Furthermore, the probability of a spike of a particular neuron at any time depends on its current position, and thus on the past, of the whole neural system.
For the associated semigroup $P_t$, we first prove Poincaré-type inequalities of the form
$$\mathrm{Var}_{P_t}(f(x)) \le \alpha(t)\int_0^t P_s\,\Gamma(f,f)(x)\,ds + \beta\sum_{i=1}^N \int_0^t P_s\,\Gamma(f,f)(\Delta_i(x))\,ds$$
for any possible starting configuration $x$. We give here the general form of the type of inequalities investigated in this paper; however, to avoid overloading this introduction with technical details, we postpone to Section 1.3 the definitions of classical quantities such as the “carré du champ” $\Gamma(f,f)$ and the other notation used here.
Then, we restrict ourselves to the special case where the initial configuration $x$ lies in the domain of the invariant measure, and we derive the stronger Poincaré inequality
$$\mathrm{Var}_{P_t}(f(x)) \le \alpha(t)\,P_t\,\Gamma(f,f)(x) + \beta\int_0^t P_s\,\Gamma(f,f)(x)\,ds.$$
Finally, we show a Poincaré inequality for the invariant measure $\pi$:
$$\mathrm{Var}_\pi(f) \le c\,\pi\left(\Gamma(f,f)\right).$$
Before we describe the model, we present the neuroscience framework of the problem.

1.1. The Neuroscience Framework

The activity of one neuron is described by the evolution of its membrane potential. This evolution presents, from time to time, a brief depolarization of high amplitude called an action potential or spike. The spiking probability, or rate, of a given neuron depends on the value of its membrane potential. These spikes are the only perturbations of the membrane potential that can be transmitted from one neuron to another, through chemical synapses. When a neuron $i$ spikes, its membrane potential is reset to $0$, while the so-called “post-synaptic” neurons influenced by neuron $i$ receive an additional amount of membrane potential.
From a probabilistic point of view, this activity can be described by a simple point process, since the whole activity is characterized by the jump times. In the literature, Hawkes processes are often used to describe systems of interacting neurons; see [1,2,3,4,5,6] for example. The reset to $0$ of the spiking neuron provides a variable-length memory for the dynamics, and therefore the point processes describing these systems are non-Markovian.
On the other hand, it is possible to describe the activity of the network with a process modeling not only the jump times but the whole evolution of the membrane potential of each neuron. This evolution then needs to be specified between the jumps. In [7], the process describing this evolution follows a deterministic drift between the jumps; more precisely, the membrane potential of each neuron is attracted with exponential speed towards an equilibrium potential. This process is Markovian and belongs to the family of Piecewise Deterministic Markov Processes introduced by Davis ([8,9]). Such processes are widely used in the probabilistic modeling of biological or chemical phenomena (see e.g. [10,11], and [12] for an overview). The point of view we adopt here is close to this framework, but we work without drift between the jumps. We therefore consider a pure jump Markov process, abbreviated PJMP in the rest of the present work.
We consider a process $X_t = (X_t^1, \dots, X_t^N)$, where $N$ is the number of neurons in the network and where, for each neuron $i$, $1 \le i \le N$, and each time $t \in \mathbb{R}_+$, the variable $X_t^i$ represents the membrane potential of neuron $i$ at time $t$. Each membrane potential $X_t^i$ takes values in $\mathbb{R}_+$. A neuron with membrane potential $x$ “spikes” with intensity $\phi(x)$, where $\phi : \mathbb{R}_+ \to \mathbb{R}_+$ is a given intensity function. When a neuron $i$ fires, its membrane potential is reset to $0$, interpreted as the resting potential, while the membrane potential of any post-synaptic neuron $j$ is increased by $W_{i \to j} \ge 0$. Between two jumps of the system, the membrane potential of each neuron is constant.
Working with Hawkes processes allows consideration of systems with infinitely many neurons, as in [1] or [6]. For our purpose, we need to work in a Markovian framework and therefore our process represents the membrane potentials of the neurons, considering a finite number N of neurons.

1.2. The Model

Let $N > 1$ be fixed and let $(N^i(ds,dz))_{i=1,\dots,N}$ be a family of i.i.d. Poisson random measures on $\mathbb{R}_+ \times \mathbb{R}_+$ with intensity measure $ds\,dz$. We study the Markov process $X_t = (X_t^1, \dots, X_t^N)$ taking values in $\mathbb{R}_+^N$ and solving, for $i = 1, \dots, N$ and $t \ge 0$,
$$X_t^i = X_0^i - \int_0^t\int_0^\infty X_{s-}^i\,\mathbb{1}_{\{z \le \phi(X_{s-}^i)\}}\,N^i(ds,dz) + \sum_{j \ne i} W_{j \to i}\int_0^t\int_0^\infty \mathbb{1}_{\{z \le \phi(X_{s-}^j)\}}\,\mathbb{1}_{\{X_{s-}^i \le m - W_{j \to i}\}}\,N^j(ds,dz). \qquad (1)$$
In the above equation, for each $j \ne i$, $W_{j \to i} \in \mathbb{R}_+$ is the synaptic weight describing the influence of neuron $j$ on neuron $i$. Finally, the function $\phi : \mathbb{R}_+ \to \mathbb{R}_+$ is the intensity function.
This equation can be read in the following way. The first term $X_0^i$ is the starting point of the process at time $0$. The second term corresponds to the reset to $0$ of neuron $i$ when it spikes: the point process $N^i$ gives the times at which a spike of neuron $i$ can occur, which actually happens with rate $\phi(X_{s-}^i)$, leading to a reset to $0$ of neuron $i$ through the term $-X_{s-}^i$. The third term corresponds to the increase of the membrane potential of neuron $i$ when one of the neurons $j$ influencing it spikes. The added value is given by the synaptic weight $W_{j \to i}$, provided the new value does not exceed the maximum potential $m$, which is ensured by the second indicator function.
The generator of the process $X$ is given, for any test function $f : \mathbb{R}_+^N \to \mathbb{R}$ and $x \in \mathbb{R}_+^N$, by
$$Lf(x) = \sum_{i=1}^N \phi(x_i)\left[f(\Delta_i(x)) - f(x)\right], \qquad (2)$$
where
$$(\Delta_i(x))_j = \begin{cases} x_j + W_{i \to j} & \text{if } j \ne i \text{ and } x_j + W_{i \to j} \le m, \\ x_j & \text{if } j \ne i \text{ and } x_j + W_{i \to j} > m, \\ 0 & \text{if } j = i, \end{cases} \qquad (3)$$
for some $m > 0$. With this definition the process remains inside the compact set
$$D := \left\{x \in \mathbb{R}_+^N : x_i \le m,\ 1 \le i \le N\right\}. \qquad (4)$$
Furthermore, we assume the following condition on the intensity function:
$$\phi(x) > cx + \delta \quad \text{for all } x \in \mathbb{R}_+, \qquad (5)$$
for some strictly positive constants $c$ and $\delta$.
The probability for a neuron to spike grows with its membrane potential, so it is natural to think of $\phi$ as an increasing function. Condition (5) implies that this growth is at least linear, and it models the spontaneous activity of the system: whatever the configuration $x$, the system always has a strictly positive spiking rate.
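Because the process is piecewise constant, its law can be simulated exactly with a Gillespie-type scheme: wait an exponential time with the total rate $\bar{\phi}(x) = \sum_i \phi(x_i)$, pick the spiking neuron $i$ with probability $\phi(x_i)/\bar{\phi}(x)$, and apply the jump map $\Delta_i$. The following Python sketch illustrates the dynamics (1)–(5); it is our own illustration, not code from the paper, and the affine intensity and parameter values are arbitrary assumptions chosen to be compatible with condition (5).

```python
import numpy as np

def simulate_pjmp(x0, W, m, phi, T, rng):
    """Gillespie-style simulation of the PJMP: neuron i spikes at rate
    phi(x[i]); a spike resets x[i] to 0 and increases each x[j] by W[i, j],
    unless that increment would push x[j] above the cap m (cf. (3))."""
    x = np.array(x0, dtype=float)
    t = 0.0
    while True:
        rates = phi(x)                           # spiking intensity of each neuron
        t += rng.exponential(1.0 / rates.sum())  # waiting time to the next spike
        if t > T:
            return x                             # membrane potentials at time T
        i = rng.choice(len(x), p=rates / rates.sum())  # neuron i spikes
        for j in range(len(x)):                  # apply the jump map Delta_i
            if j != i and x[j] + W[i, j] <= m:
                x[j] += W[i, j]
        x[i] = 0.0                               # degenerate jump: reset to 0

# illustrative parameters (our own choice, not from the paper)
rng = np.random.default_rng(0)
N, m = 4, 5.0
W = 0.8 * np.ones((N, N)); np.fill_diagonal(W, 0.0)
phi = lambda x: 1.0 * x + 0.5                    # affine intensity, in line with (5)
print(simulate_pjmp(np.zeros(N), W, m, phi, T=10.0, rng=rng))
```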

1.3. Poincaré-Type Inequalities

Our purpose is to show Poincaré-type inequalities for our PJMP, whose dynamics are similar to the model introduced in [1]. The main difference between our framework and that of [1] lies in the fact that, as in [7], we study a process modeling the membrane potential of each neuron instead of a point process focusing only on the spiking events. Focusing on the spike train is sufficient to describe the network activity, since it contains all the relevant information. However, the membrane potential integrates every relevant spike that occurred in the past, and this gives us a Markovian framework. Regardless of the point of view, the notable difference in dynamics with [1] is the absence of drift between jumps: we assume here that there is no loss of memory between spikes. Therefore, our conclusions do not directly apply to the model studied in [1].
We investigate Poincaré-type inequalities first for the semigroup $P_t$ and then for the invariant measure $\pi$. Concerning the semigroup inequality, we study two different cases: the general one, where the system starts from any possible initial configuration, and the restricted one, where the initial configuration belongs to the domain of the invariant measure.
Let us first describe the general framework and define the Poincaré inequalities in a discrete setting (see also [13,14,15,16,17]). First, we fix a convention that we will use widely: for a function $f$ and a measure $\nu$, we write $\nu(f)$ for the expectation of $f$ with respect to $\nu$, that is,
$$\nu(f) = \int f\,d\nu.$$
We consider a Markov process $(X_t)_{t \ge 0}$ described by its infinitesimal generator $L$ and the associated Markov semigroup $P_t f(x) = \mathbb{E}^x(f(X_t))$. For a semigroup and its associated infinitesimal generator we will need the following well-known relationship: $\frac{d}{ds}P_s = LP_s = P_sL$ (see for example [18]).
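On a finite state space the semigroup is the matrix exponential $P_t = e^{tL}$, and the relation $\frac{d}{ds}P_s = LP_s = P_sL$ can be verified numerically. A minimal sketch (our own illustration, using a random generator matrix rather than the PJMP generator):

```python
import numpy as np
from scipy.linalg import expm

# random generator matrix of a 4-state jump chain: non-negative
# off-diagonal rates, rows summing to zero
rng = np.random.default_rng(4)
Q = rng.uniform(0.0, 1.0, (4, 4))
np.fill_diagonal(Q, 0.0)
np.fill_diagonal(Q, -Q.sum(axis=1))

P = lambda t: expm(t * Q)                  # semigroup P_t = e^{tQ}
s, h = 0.7, 1e-6
dP = (P(s + h) - P(s - h)) / (2 * h)       # numerical derivative d/ds P_s
assert np.allclose(dP, Q @ P(s), atol=1e-6)   # d/ds P_s = L P_s
assert np.allclose(dP, P(s) @ Q, atol=1e-6)   # d/ds P_s = P_s L
```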
We say that $\pi$ is an invariant measure for the semigroup $(P_t)_{t \ge 0}$ if and only if
$$\pi P_t = \pi \quad \text{for all } t \ge 0.$$
Furthermore, we define the “carré du champ” operator (see [19]) by
$$\Gamma(f,g) := \frac{1}{2}\left(L(fg) - f\,Lg - g\,Lf\right).$$
For more details on this important operator and the inequalities related to it, one can look at [18,19,20]. For the PJMP defined above, with the specific generator $L$ given by (2), a simple calculation shows that the carré du champ takes the following form:
$$\Gamma(f,f) = \frac{1}{2}\left(Lf^2 - 2f\,Lf\right) = \frac{1}{2}\sum_{i=1}^N \phi(x_i)\left(f(\Delta_i(x)) - f(x)\right)^2.$$
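The identity $\Gamma(f,f) = \frac{1}{2}(Lf^2 - 2fLf)$ can be checked numerically against the explicit sum above. The sketch below is our own illustration; the helper names (`Delta`, `generator`, `carre_du_champ`) and all parameter values are assumptions made for the example.

```python
import numpy as np

def Delta(x, i, W, m):
    """Jump map Delta_i of (3): capped post-synaptic increments, reset of i."""
    y = x.copy()
    mask = (np.arange(len(x)) != i) & (x + W[i] <= m)
    y[mask] += W[i][mask]
    y[i] = 0.0
    return y

def generator(f, x, phi, W, m):
    """Generator (2): Lf(x) = sum_i phi(x_i) (f(Delta_i(x)) - f(x))."""
    return sum(phi(x[i]) * (f(Delta(x, i, W, m)) - f(x)) for i in range(len(x)))

def carre_du_champ(f, x, phi, W, m):
    """Explicit carre du champ of the PJMP generator."""
    return 0.5 * sum(phi(x[i]) * (f(Delta(x, i, W, m)) - f(x)) ** 2
                     for i in range(len(x)))

rng = np.random.default_rng(1)
N, m = 3, 5.0
W = rng.uniform(0.0, 1.0, (N, N)); np.fill_diagonal(W, 0.0)
phi = lambda v: v + 0.5
f = lambda x: np.sin(x).sum()
x = rng.uniform(0.0, m, N)
lhs = carre_du_champ(f, x, phi, W, m)
rhs = 0.5 * (generator(lambda y: f(y) ** 2, x, phi, W, m)
             - 2 * f(x) * generator(f, x, phi, W, m))
assert abs(lhs - rhs) < 1e-12   # Gamma(f,f) = (L(f^2) - 2 f Lf) / 2
```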
We say that a measure $\nu$ satisfies a Poincaré inequality if there exists a constant $C > 0$, independent of $f$, such that
$$(SG)\qquad \mathrm{Var}_\nu(f) \le C\,\nu\left(\Gamma(f,f)\right),$$
where the variance of a function $f$ with respect to a measure $\nu$ is defined in the usual way, $\mathrm{Var}_\nu(f) = \nu\left((f - \nu(f))^2\right)$. It should be noted that in the case where the measure $\nu$ is the semigroup $P_t$, the constant $C$ may depend on $t$, $C = C(t)$.
For the Poincaré inequality for continuous-time Markov chains, one can look at [16,21]. In [13,14,17], the Poincaré inequality (SG) for $P_t$ has been shown for some point processes, with a constant that depends on the time $t$, while the stronger log-Sobolev inequality has been disproved. The general approach of these papers, which we also follow in the current work, is the so-called semigroup method, which proves the inequality directly for the semigroup $P_t$.
The main difficulty here is that, for the pure jump Markov process examined in the current paper, the translation property
$$\mathbb{E}^{x+y} f(z) = \mathbb{E}^x f(z+y)$$
used in [13,17] does not hold. This matters because the translation property is a key element in these papers, since it allows one to bound the carré du champ by comparing the mean $\mathbb{E}^x f(z)$, where the process starts from position $x$, with the mean $\mathbb{E}^{\Delta_i(x)} f(z)$, where it starts from $\Delta_i(x)$, the jump-neighbor of $x$. We can still obtain Poincaré-type inequalities, but with a constant $C(t)$ which is a polynomial of degree higher than one. This degree is higher than that of the constant $C(t) = t$, the optimal one obtained in [17] for the path space of Poisson point processes.
It should be noted that the aforementioned translation property relates to the $\Gamma_2$ criterion (see [22,23]) for the Poincaré inequality, which states that if
$$\Gamma_2(f) := \frac{1}{2}\left(L\left(\Gamma(f,f)\right) - 2\,\Gamma(f, Lf)\right) \ge 0,$$
then the Poincaré inequality is satisfied. A more detailed discussion of this criterion follows in Section 2.2. Since it is not satisfied in our case, we obtain a Poincaré-type inequality instead.
Before we present the results of the paper, it is important to highlight a distinction concerning the nature of the initial configuration from which the process can start. We can classify the initial configurations according to the return probability to them. Recall that the membrane potential $x_i$ of every neuron $i$ takes positive values within a compact set, and that whenever a neuron $j$ different from $i$ spikes, neuron $i$ jumps up by $W_{j \to i}$, while the only other movement it makes is jumping to zero when it spikes itself. This means that every variable $x_i$ can jump down only to zero, while after its first jump it can only pass through a finite number of possible positions, since the state space is then discrete inside a compact set, due to the imposed maximum potential $m$. Since the neurons stay still between spikes, there is a finite number of possible configurations to which $X = (X^1, \dots, X^N)$ can return after every neuron has spiked for the first time. This set is the domain of the invariant measure $\pi$ of the semigroup $P_t$, and we denote it by $\hat{D}$. Thus, if the initial configuration $x = (x_1, \dots, x_N)$ does not belong to $\hat{D}$, then after the process enters $\hat{D}$ it will never return to this initial configuration.
It should be noted that it is easy to find initial configurations $x = (x_1, \dots, x_N) \notin \hat{D}$. For example, one can consider any $x$ such that at least one of the $x_i$ is not a sum of synaptic weights $W_{j \to i}$, or any $x$ with $x_i = x_j$ for all $i$ and $j$.
Below we present the Poincaré inequality for the semigroup P t for general starting configurations.
Theorem 1.
Assume the PJMP as described in (2)–(5). Then, for every $x \in D$, the following Poincaré-type inequality holds:
$$\mathrm{Var}_{\mathbb{E}^x}(f(x_t)) \le \alpha(t)\int_0^t P_s\,\Gamma(f,f)(x)\,ds + \beta\sum_{i=1}^N \int_0^t P_s\,\Gamma(f,f)(\Delta_i(x))\,ds,$$
with $\alpha(t)$ a second-order polynomial in the time $t$ that does not depend on the function $f$, and $\beta$ a constant.
One notices that, since the coefficient $\alpha(t)$ is a polynomial of second order while $\beta$ is a constant, the first term dominates the second for large times $t$, as shown in the next corollary.
Corollary 1.
Assume the PJMP as described in (2)–(5). Then, for every $x \in D$ and $t$ sufficiently large, i.e., $t > \zeta(f)$,
$$\mathrm{Var}_{\mathbb{E}^x}(f(x_t)) \le 2\,\alpha(t)\int_0^t P_w\,\Gamma(f,f)(x)\,dw,$$
where $\zeta(f)$ is a constant depending only on $f$:
$$\zeta(f) = \max_{x \in D}\left(\beta\sum_{i=1}^N \frac{\Gamma(f,f)(\Delta_i(x))}{\Gamma(f,f)(x)}\right)^{\frac{1}{2}}.$$
One should notice that although the lower bound $\zeta(f)$ depends on the function $f$, the coefficient $2\alpha(t)$ of the inequality does not.
The proof of the Poincaré inequality for the general initial configuration is presented in Section 2.
In the special case where $x$ lies in the domain of the invariant measure, we obtain the following stronger inequality.
Theorem 2.
Assume the PJMP as described in (2)–(5). Then there exists $t_1 > 0$ such that for every $t > t_1$ and every $x \in \hat{D}$, the following Poincaré-type inequality holds:
$$\mathrm{Var}_{\mathbb{E}^x}(f(x_t)) \le \gamma(t)\,P_t\,\Gamma(f,f)(x) + 2\int_0^t P_s\,\Gamma(f,f)(x)\,ds,$$
with $\gamma(t)$ a third-order polynomial in the time $t$ that does not depend on the function $f$.
As in the general case, for t large enough we have the following corollary.
Corollary 2.
Assume the PJMP as described in (2)–(5). Then there exists $t_1 > 0$ such that, for every $x \in \hat{D}$ and $t$ sufficiently large, i.e., $t > \max\{\xi(f), t_1\}$,
$$\mathrm{Var}_{\mathbb{E}^x}(f(x_t)) \le 2\,\gamma(t)\,P_t\,\Gamma(f,f)(x),$$
where $\xi(f)$ is a constant depending only on $f$:
$$\xi(f) = 2\left(\frac{\max_{x \in \hat{D}}\Gamma(f,f)(x)}{\min_{x \in \hat{D}}\Gamma(f,f)(x)}\right)^{\frac{1}{2}}.$$
We conclude this section with the Poincaré inequality for the invariant measure $\pi$, presented in the next theorem.
Theorem 3.
Assume the PJMP as described in (2)–(5). Then $\pi$ satisfies the Poincaré inequality
$$\pi\left(f - \pi f\right)^2 \le C_0\,\pi\left(\Gamma(f,f)\right)$$
for some constant $C_0 > 0$.

2. Proof of the Poincaré Inequality for General Initial Configurations

In this section, we consider the process started from an arbitrary initial configuration $x \in D$, for the PJMP described by (2)–(5), and we prove the local Poincaré inequalities presented in Theorem 1 and Corollary 1. Let us first state some technical results.

2.1. Technical Results

We start by establishing properties of the jump probabilities of the degenerate PJMP. Since the process is constant between jumps, the set of positions $y$ reachable after a given time $t$ by a trajectory starting from $x$ is discrete. We therefore define
$$\pi_t(x,y) := P_x(X_t = y) \qquad \text{and} \qquad D_x := \{y \in D : \pi_t(x,y) > 0\}.$$
This set is finite for the following reasons. On the one hand, for each neuron $i \in I$, where $I = \{1, \dots, N\}$ denotes the set of neurons, the set $S_i = \{0\} \cup \left\{\sum_{k=1}^n W_{j_k \to i} : n \in \mathbb{N}^*,\ j_k \in I\right\}$ is discrete, and its intersection with any compact set is finite. On the other hand, we have $D_x \subset \prod_{i \in I}\left(S_i \cup (x_i + S_i)\right) \cap D$.
The idea is that, since the process is constant between jumps, the elements of $D_x$ are exactly the positions $y$ for which there exists a sequence of jumps leading from $x$ to $y$. Since we are only interested in the arrival position $y$, among all jump sequences leading to $y$ we may consider only those with a minimal number of jumps, and the number of such minimal jump sequences leading to positions inside a compact set is finite, because each $W_{j \to i}$ is non-negative.
Since $x$ also lies in the compact set $D$, we can obtain an upper bound on the cardinality of $D_x$ that is independent of $x$.
For a given time $s \in \mathbb{R}_+$ and a given position $x \in D$, we denote by $p_s(x)$ the probability that, starting at time $0$ from position $x$, the process has no jump in the interval $[0,s]$, and, for a given neuron $i \in I$, by $p_s^i(x)$ the probability that the process has exactly one jump in $[0,s]$, performed by neuron $i$, and no jump of any other neuron.
Introducing the notation $\bar{\phi}(x) = \sum_{j \in I} \phi(x_j)$, and given the dynamics of the model, we have
$$p_s(x) = e^{-s\bar{\phi}(x)}$$
and
$$p_s^i(x) = \int_0^s \phi(x_i)\,e^{-u\bar{\phi}(x)}\,e^{-(s-u)\bar{\phi}(\Delta_i(x))}\,du = \begin{cases} \dfrac{\phi(x_i)}{\bar{\phi}(x) - \bar{\phi}(\Delta_i(x))}\left(e^{-s\bar{\phi}(\Delta_i(x))} - e^{-s\bar{\phi}(x)}\right) & \text{if } \bar{\phi}(\Delta_i(x)) \ne \bar{\phi}(x), \\[1mm] s\,\phi(x_i)\,e^{-s\bar{\phi}(x)} & \text{if } \bar{\phi}(\Delta_i(x)) = \bar{\phi}(x). \end{cases} \qquad (6)$$
Define
$$t_0 = \begin{cases} \dfrac{\ln \bar{\phi}(x) - \ln \bar{\phi}(\Delta_i(x))}{\bar{\phi}(x) - \bar{\phi}(\Delta_i(x))} & \text{if } \bar{\phi}(\Delta_i(x)) \ne \bar{\phi}(x), \\[1mm] \dfrac{1}{\bar{\phi}(x)} & \text{if } \bar{\phi}(\Delta_i(x)) = \bar{\phi}(x). \end{cases} \qquad (7)$$
As a function of $s$, $p_s^i(x)$ is continuous, strictly increasing on $(0,t_0)$ and strictly decreasing on $(t_0, +\infty)$, and we have $p_0^i(x) = 0$.
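These closed forms are straightforward to check numerically. The sketch below (our own illustration, with arbitrary rate values) evaluates (6) and (7) and confirms that $s \mapsto p_s^i(x)$ attains its maximum at $t_0$:

```python
import numpy as np

def p_i(s, phi_xi, phibar_x, phibar_dx):
    """Probability (6) of exactly one jump, by neuron i, in [0, s]."""
    if not np.isclose(phibar_x, phibar_dx):
        return phi_xi / (phibar_x - phibar_dx) * (np.exp(-s * phibar_dx)
                                                  - np.exp(-s * phibar_x))
    return s * phi_xi * np.exp(-s * phibar_x)

def t0(phibar_x, phibar_dx):
    """The time (7) at which s -> p_i(s) switches from increasing to decreasing."""
    if not np.isclose(phibar_x, phibar_dx):
        return (np.log(phibar_x) - np.log(phibar_dx)) / (phibar_x - phibar_dx)
    return 1.0 / phibar_x

# illustrative rates: phi_bar(x) = 3.0, phi_bar(Delta_i(x)) = 2.0, phi(x_i) = 1.0
s_star = t0(3.0, 2.0)
grid = np.linspace(1e-4, 5 * s_star, 2000)
vals = np.array([p_i(s, 1.0, 3.0, 2.0) for s in grid])
assert abs(grid[vals.argmax()] - s_star) < 1e-2   # the maximum sits at t_0
```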
Lemma 1.
Assume the PJMP as described in (2)–(5). There exist positive constants $C_1$ and $C_2$, independent of $t$, $x$ and $y$, such that:
  • For all $t > t_0$, we have
$$\sum_{y \in D_x} \frac{\pi_t^2(\Delta_i(x), y)}{\pi_t(x,y)} \le C_1.$$
  • For all $t \le t_0$, we have
$$\sum_{y \in D_x \setminus \{\Delta_i(x)\}} \frac{\pi_t^2(\Delta_i(x), y)}{\pi_t(x,y)} \le C_1$$
    and
$$\frac{\pi_t^2(\Delta_i(x), \Delta_i(x))}{\pi_t(x, \Delta_i(x))} \le \frac{\pi_t(\Delta_i(x), \Delta_i(x))}{\pi_t(x, \Delta_i(x))} \le \frac{C_2}{t}.$$
Proof. 
As said before, the set $D_x$ is finite, so it is sufficient to obtain an upper bound for each ratio $\frac{\pi_t^2(\Delta_i(x),y)}{\pi_t(x,y)}$.
We have, for all $s \in (0,t)$,
$$\frac{\pi_t^2(\Delta_i(x),y)}{\pi_t(x,y)} \le \frac{\left(\pi_{t-s}(\Delta_i(x),y)\,p_s(y) + \sup_{z \in D}\left(1 - p_s(z)\right)\right)^2}{p_s^i(x)\,\pi_{t-s}(\Delta_i(x),y)}. \qquad (8)$$
Here we decomposed the numerator according to two events: either $X_{t-s} = y$ and there is no jump in the time interval $[t-s,t]$, or there is at least one jump in $[t-s,t]$, whatever the position $z \in D$ of the process at time $t-s$. For the denominator, we kept only the event that the unique jump in $[0,s]$ is a spike of neuron $i$, followed by a transition from $\Delta_i(x)$ to $y$ in time $t-s$.
From the previous inequality, we then obtain
$$\frac{\pi_t^2(\Delta_i(x),y)}{\pi_t(x,y)} \le \frac{\left(\pi_{t-s}(\Delta_i(x),y)\,p_s(y) + \left(1 - e^{-sN\phi(m)}\right)\right)^2}{p_s^i(x)\,\pi_{t-s}(\Delta_i(x),y)}, \qquad (9)$$
where we recall that the constant $m$ appears in the definition of the compact set $D$ introduced in (4).
Let us first assume that $t > t_0$. Recall that $t_0$ is defined in (7).
If $\pi_{t-t_0}(\Delta_i(x),y) \ge p_{t_0}^i(x)$, we have
$$\frac{\pi_t^2(\Delta_i(x),y)}{\pi_t(x,y)} \le \frac{1}{\left(p_{t_0}^i(x)\right)^2}.$$
Assume now that $\pi_{t-t_0}(\Delta_i(x),y) < p_{t_0}^i(x)$, and recall that, as a function of $s$, $p_s^i(x)$ is continuous, strictly increasing on $(0,t_0)$, with $p_0^i(x) = 0$.
On the other hand, as a function of $s$, $\pi_{t-s}(\Delta_i(x),y)$ is continuous and takes the value $\pi_t(\Delta_i(x),y) > 0$ for $s = 0$.
We deduce from this that there exists $s^* \in (0,t_0)$ such that $p_{s^*}^i(x) = \pi_{t-s^*}(\Delta_i(x),y)$.
Now (9) with $s = s^*$ gives us
$$\frac{\pi_t^2(\Delta_i(x),y)}{\pi_t(x,y)} \le \left(p_{s^*}(y)\right)^2 + 2\,p_{s^*}(y)\,\frac{1 - e^{-s^* N\phi(m)}}{p_{s^*}^i(x)} + \left(\frac{1 - e^{-s^* N\phi(m)}}{p_{s^*}^i(x)}\right)^2.$$
For all $s \in (0,t_0)$ we have $p_s(y) \le 1$, so it remains to study $\frac{1 - e^{-sN\phi(m)}}{p_s^i(x)}$ as a function of $s \in (0,t_0)$.
Using the explicit value of $p_s^i(x)$ given in (6) and assumption (5), we obtain, for all $s \in (0,t_0)$,
$$\frac{1 - e^{-sN\phi(m)}}{p_s^i(x)} \le \begin{cases} \dfrac{e^{t_0 N\phi(m)}}{\delta}\,\dfrac{\left|\bar{\phi}(x) - \bar{\phi}(\Delta_i(x))\right|\left(1 - e^{-sN\phi(m)}\right)}{1 - e^{-s\left|\bar{\phi}(x) - \bar{\phi}(\Delta_i(x))\right|}} & \text{if } \bar{\phi}(\Delta_i(x)) \ne \bar{\phi}(x), \\[2mm] \dfrac{e^{t_0 N\phi(m)}}{\delta}\,\dfrac{1 - e^{-sN\phi(m)}}{s} & \text{if } \bar{\phi}(\Delta_i(x)) = \bar{\phi}(x). \end{cases}$$
Recall that $\delta > 0$ is defined in assumption (5) and satisfies $\phi(x) \ge \delta$ for all $x \in \mathbb{R}_+$.
In both cases, when $s$ is bounded away from zero we obtain an upper bound independent of $x$, and when $s$ goes to zero the right-hand side tends to $N\phi(m)\,\frac{e^{t_0 N\phi(m)}}{\delta}$.
From this, we deduce that there exists a constant $M_D$ such that, for all $s \in (0,t_0)$,
$$\frac{1 - e^{-sN\phi(m)}}{p_s^i(x)} \le M_D.$$
Putting everything together, we obtain the announced result in the case $t > t_0$.
We now consider the case where $t \le t_0$.
We start by considering the case $y \ne \Delta_i(x)$ and go back to (9).
As a function of $s$, $\pi_{t-s}(\Delta_i(x),y)$ is continuous and takes the values $\pi_t(\Delta_i(x),y) > 0$ and $\pi_0(\Delta_i(x),y) = 0$ for $s = 0$ and $s = t$, respectively.
We deduce from this that there exists $s^* \in (0,t) \subset (0,t_0)$ such that $p_{s^*}^i(x) = \pi_{t-s^*}(\Delta_i(x),y)$, and we are back in the previous case, so the same bound holds.
Let us now assume that $y = \Delta_i(x)$. We have
$$\frac{\pi_t^2(\Delta_i(x),y)}{\pi_t(x,y)} \le \frac{\pi_t(\Delta_i(x),\Delta_i(x))}{\pi_t(x,\Delta_i(x))} \le \frac{1}{p_t^i(x)}.$$
Recalling the explicit expression of $p_t^i(x)$ given in (6) and using (5) to bound the intensity function, we get
$$p_t^i(x) = t\,\phi(x_i)\,e^{-t\bar{\phi}(x)} \ge t\,\delta\,e^{-t_0 \sup_{x \in D}\bar{\phi}(x)} = Ct$$
for some constant $C$ independent of $t$ and $x$, which gives the announced result. □
Using the last result, we can obtain the first technical bound needed in the proof of the local Poincaré inequality.
Lemma 2.
Assume the PJMP as described in (2)–(5). Then
$$\left(\int_{t_0}^{t-s}\left(\mathbb{E}^{\Delta_i(x)} - \mathbb{E}^x\right)\left[\sum_{j=1}^N \phi(x_u^j)\left(f(\Delta_j(x_u)) - f(x_u)\right)\right]du\right)^2 \le (t-s)(1 + C_1)\,M\int_{t_0}^{t-s}\mathbb{E}^x\,\Gamma(f,f)(x_u)\,du,$$
where $M = \sup_{x \in D}\sum_{i=1}^N \phi(x_i)$.
Proof. 
Let $\pi_t(x,y)$ denote the probability kernel of $\mathbb{E}^x$, i.e., $\mathbb{E}^x(f(x_t)) = \sum_y \pi_t(x,y)\,f(y)$. Then we can write
$$II_1 := \left(\int_{t_0}^{t-s}\left(\mathbb{E}^{\Delta_i(x)} - \mathbb{E}^x\right)\left[\sum_{j=1}^N \phi(x_u^j)\left(f(\Delta_j(x_u)) - f(x_u)\right)\right]du\right)^2 = \left(\int_{t_0}^{t-s}\sum_y \pi_u(x,y)\left(\frac{\pi_u(\Delta_i(x),y)}{\pi_u(x,y)} - 1\right)\sum_{j=1}^N \phi(y_j)\left(f(\Delta_j(y)) - f(y)\right)du\right)^2.$$
To continue, we use Hölder's inequality to pass the square inside the time integral, which gives
$$II_1 \le (t-s)\int_{t_0}^{t-s}\left(\sum_y \pi_u(x,y)\left(\frac{\pi_u(\Delta_i(x),y)}{\pi_u(x,y)} - 1\right)\sum_{j=1}^N \phi(y_j)\left(f(\Delta_j(y)) - f(y)\right)\right)^2 du,$$
and then we apply the Cauchy–Schwarz inequality for the measure $\mathbb{E}^x$ to get
$$II_1 \le (t-s)\int_{t_0}^{t-s}\mathbb{E}^x\left[\left(\frac{\pi_u(\Delta_i(x),y)}{\pi_u(x,y)} - 1\right)^2\right]\mathbb{E}^x\left[S^2\right]du, \qquad \text{where } S := \sum_{j=1}^N \phi(y_u^j)\left(f(\Delta_j(y_u)) - f(y_u)\right).$$
The first quantity involved in the above integral is bounded by a constant, thanks to Lemma 1:
$$\mathbb{E}^x\left[\left(\frac{\pi_u(\Delta_i(x),y)}{\pi_u(x,y)} - 1\right)^2\right] \le 1 + \mathbb{E}^x\left[\left(\frac{\pi_u(\Delta_i(x),y)}{\pi_u(x,y)}\right)^2\right] = 1 + \sum_{y \in D_x}\frac{\pi_u^2(\Delta_i(x),y)}{\pi_u(x,y)} \le 1 + C_1,$$
while for the sum involved in the second quantity, since $\phi(x) \ge \delta > 0$, we can use Hölder's inequality:
$$S^2 \le \left(\sum_{j=1}^N \phi(y_u^j)\right)\sum_{j=1}^N \phi(y_u^j)\left(f(\Delta_j(y_u)) - f(y_u)\right)^2 \le M\sum_{j=1}^N \phi(y_u^j)\left(f(\Delta_j(y_u)) - f(y_u)\right)^2 = M\,\Gamma(f,f)(y_u),$$
where $M = \sup_{x \in D}\sum_{i=1}^N \phi(x_i)$, so that
$$II_1 \le (t-s)(1 + C_1)\,M\int_{t_0}^{t-s}\mathbb{E}^x\,\Gamma(f,f)(x_u)\,du.$$
 □
We now extend the last bound to an integral over a time domain starting at $0$.
Lemma 3.
For the PJMP as described in (2)–(5), we have
$$\left(\int_0^{t-s}\left(\mathbb{E}^{\Delta_i(x)}\left(Lf(x_u)\right) - \mathbb{E}^x\left(Lf(x_u)\right)\right)du\right)^2 \le 16\,t_0^2\,M\,\Gamma(f,f)(\Delta_i(x)) + c(t-s)\int_0^{t-s}\mathbb{E}^x\,\Gamma(f,f)(x_u)\,du,$$
where $c(t) = 8\,t_0\,M(C_1+1) + 2t(1+C_1)M$.
Proof. 
To calculate a bound for
$$\mathbb{E}^{\Delta_i(x)}\left(Lf(x_u)\right) - \mathbb{E}^x\left(Lf(x_u)\right),$$
we need to control the ratio $\frac{\pi_u^2(\Delta_i(x),y)}{\pi_u(x,y)}$. As shown in Lemma 1, this ratio admits a time-dependent bound when $u \le t_0$, while otherwise it is bounded by a constant. For this reason, we start by splitting the time integral into the two domains $(0,t_0)$ and $(t_0, t-s)$:
$$I_1 := \left(\int_0^{t-s}\left(\mathbb{E}^{\Delta_i(x)}(Lf(x_u)) - \mathbb{E}^x(Lf(x_u))\right)du\right)^2 \le 2\underbrace{\left(\int_{t_0}^{t-s}\left(\mathbb{E}^{\Delta_i(x)} - \mathbb{E}^x\right)\left[\sum_{j=1}^N \phi(x_u^j)\left(f(\Delta_j(x_u)) - f(x_u)\right)\right]du\right)^2}_{II_1} + 2\underbrace{\left(\int_0^{t_0}\left(\mathbb{E}^{\Delta_i(x)} - \mathbb{E}^x\right)\left[\sum_{j=1}^N \phi(x_u^j)\left(f(\Delta_j(x_u)) - f(x_u)\right)\right]du\right)^2}_{:= II_2}. \qquad (13)$$
The first summand $II_1$ is bounded from above by the previous lemma. To bound the second term $II_2$ on the right-hand side of (13), we write
$$II_2 \le 2\underbrace{\left(\int_0^{t_0}\left(\pi_u(\Delta_i(x),\Delta_i(x)) - \pi_u(x,\Delta_i(x))\right)\sum_{j=1}^N \phi\left(\Delta_i(x)_j\right)\left(f(\Delta_j(\Delta_i(x))) - f(\Delta_i(x))\right)du\right)^2}_{:= III_1} + 2\underbrace{\left(\int_0^{t_0}\sum_{y \in D,\,y \ne \Delta_i(x)}\left(\pi_u(\Delta_i(x),y) - \pi_u(x,y)\right)\sum_{j=1}^N \phi(y_j)\left(f(\Delta_j(y)) - f(y)\right)du\right)^2}_{:= III_2}. \qquad (14)$$
The distinction between the two cases, according to whether the configuration of the neurons at time $u$ is $\Delta_i(x)$ or not, corresponds to the two different bounds that Lemma 1 provides for the ratio $\frac{\pi_t^2(\Delta_i(x),y)}{\pi_t(x,y)}$, depending on whether $y$ equals $\Delta_i(x)$ or not. We first treat the second term on the right-hand side, working as in Lemma 2. We apply the Hölder inequality to the time integral, after dividing by the normalization constant $t_0$. This gives
$$III_2 \le t_0\int_0^{t_0}\left(\sum_{y \in D,\,y \ne \Delta_i(x)}\left(\frac{\pi_u(\Delta_i(x),y)}{\pi_u(x,y)} - 1\right)\pi_u(x,y)\sum_{j=1}^N \phi(y_j)\left(f(\Delta_j(y)) - f(y)\right)\right)^2 du.$$
Now we use the Cauchy–Schwarz inequality in the first sum. We then obtain
$$III_2 \le t_0\int_0^{t_0}\left(\sum_{y \in D,\,y \ne \Delta_i(x)}\pi_u(x,y)\left(\frac{\pi_u(\Delta_i(x),y)}{\pi_u(x,y)} - 1\right)^2\right)\left(\sum_{y \in D,\,y \ne \Delta_i(x)}\pi_u(x,y)\left(\sum_{j=1}^N \phi(y_j)\left(f(\Delta_j(y)) - f(y)\right)\right)^2\right)du. \qquad (15)$$
The first term in the last product can be bounded from above using Lemma 1:
$$\sum_{y \in D,\,y \ne \Delta_i(x)}\pi_u(x,y)\left(\frac{\pi_u(\Delta_i(x),y)}{\pi_u(x,y)} - 1\right)^2 \le \sum_{y \in D,\,y \ne \Delta_i(x)}\left(\frac{\pi_u^2(\Delta_i(x),y)}{\pi_u(x,y)} + \pi_u(x,y)\right) \le C_1 + 1. \qquad (16)$$
Meanwhile, for the second term involved in the product of (15), we can write
$$\sum_{y \in D,\,y \ne \Delta_i(x)}\pi_u(x,y)\left(\sum_{j=1}^N \phi(y_j)\left(f(\Delta_j(y)) - f(y)\right)\right)^2 \le \sum_{y \in D,\,y \ne \Delta_i(x)}\pi_u(x,y)\left(\sum_{j=1}^N \phi(y_j)\right)\sum_{j=1}^N \phi(y_j)\left(f(\Delta_j(y)) - f(y)\right)^2 \le M\sum_y \pi_u(x,y)\,\Gamma(f,f)(y) = M\,\mathbb{E}^x\,\Gamma(f,f)(x_u),$$
where for the first bound we made use once more of the Hölder inequality, after dividing by the appropriate normalization constant $\sum_{j=1}^N \phi(y_j)$. Putting the last bound together with (16) into (15), we obtain
$$III_2 \le t_0\,M\,(C_1 + 1)\int_0^{t_0}\mathbb{E}^x\,\Gamma(f,f)(x_u)\,du. \qquad (17)$$
We now bound the first summand of (14). Notice that in this case we cannot use the analogous bound from Lemma 1, namely $\frac{\pi_t^2(\Delta_i(x),\Delta_i(x))}{\pi_t(x,\Delta_i(x))} \le \frac{C_2}{t}$, as we did for $III_2$, since that would lead to a final upper bound of the form $III_1 \le t_0\,M(C_1+1)\int_0^{t_0}\frac{1}{u}\,\mathbb{E}^x\,\Gamma(f,f)(x_u)\,du$, which may diverge. Instead, we bound $III_1$ by the carré du champ of the function after the first jump. We can write
$$III_1 \le 4\left(\int_0^{t_0}\sum_{j=1}^N \phi\left(\Delta_i(x)_j\right)\left|f(\Delta_j(\Delta_i(x))) - f(\Delta_i(x))\right|du\right)^2 \le 4\,t_0^2\,\frac{\left(\sum_{j=1}^N \phi(\Delta_i(x)_j)\right)^2}{\sum_{j=1}^N \phi(\Delta_i(x)_j)}\sum_{j=1}^N \phi\left(\Delta_i(x)_j\right)\left|f(\Delta_j(\Delta_i(x))) - f(\Delta_i(x))\right|^2,$$
where above we divided by the normalization constant $\sum_{j=1}^N \phi(\Delta_i(x)_j)$, which is legitimate since $\phi(x) \ge \delta$, and then applied the Hölder inequality to the sum, so that
$$III_1 \le 4\,t_0^2\,M\sum_{j=1}^N \phi\left(\Delta_i(x)_j\right)\left(f(\Delta_j(\Delta_i(x))) - f(\Delta_i(x))\right)^2 = 4\,t_0^2\,M\,\Gamma(f,f)(\Delta_i(x)).$$
If we combine this together with (17) and (14), we get the following bound for the second term of (13):
$$II_2 \le 8\,t_0^2\,M\,\Gamma(f,f)(\Delta_i(x)) + 4(C_1+1)\,t_0\,M\int_0^{t_0}\mathbb{E}^x\,\Gamma(f,f)(x_u)\,du.$$
The last bound, together with the bound shown in Lemma 2 for the first term $II_1$ of (13), gives
$$I_1 \le 16\,t_0^2\,M\,\Gamma(f,f)(\Delta_i(x)) + 8\,t_0\,M(C_1+1)\int_0^{t_0}\mathbb{E}^x\,\Gamma(f,f)(x_u)\,du + 2(t-s)(1+C_1)M\int_{t_0}^{t-s}\mathbb{E}^x\,\Gamma(f,f)(x_u)\,du \le 16\,t_0^2\,M\,\Gamma(f,f)(\Delta_i(x)) + 2M(C_1+1)\left(4t_0 + (t-s)\right)\int_0^{t-s}\mathbb{E}^x\,\Gamma(f,f)(x_u)\,du,$$
since the carré du champ is non-negative, as shown below:
$$\Gamma(f,f) = \frac{1}{2}\left(L(f^2) - 2f\,Lf\right) = \lim_{t \to 0}\frac{1}{2t}\left(P_t f^2 - (P_t f)^2\right) \ge 0$$
by the Cauchy–Schwarz inequality. □
We have now obtained all the technical results needed to show the Poincaré inequality for the semigroup $P_t$ for general initial configurations.

2.2. Proof of Theorem 1

Denote $P_t f(x) = \mathbb{E}^x f(x_t)$. Then
$$P_t f^2(x) - \left(P_t f(x)\right)^2 = \int_0^t \frac{d}{ds}\,P_s\left(P_{t-s}f\right)^2(x)\,ds = \int_0^t P_s\,\Gamma\left(P_{t-s}f,\,P_{t-s}f\right)(x)\,ds, \qquad (18)$$
since $\frac{d}{ds}P_s = LP_s = P_sL$.
We can write
$$\Gamma\left(P_{t-s}f,\,P_{t-s}f\right)(x) = \sum_{i=1}^N \phi(x_i)\left(\mathbb{E}^{\Delta_i(x)} f(x_{t-s}) - \mathbb{E}^x f(x_{t-s})\right)^2. \qquad (19)$$
If we could use the translation property $\mathbb{E}^{x+y} f(z) = \mathbb{E}^x f(z+y)$, used for instance in proving Poincaré and modified log-Sobolev inequalities in [13,17], then we could bound relatively easily the carré du champ of the expectation of a function by the expectation of the carré du champ of the function itself, as demonstrated below:
$$\sum_{i=1}^N \phi(x_i)\left(\mathbb{E}^{\Delta_i(x)} f(x_{t-s}) - \mathbb{E}^x f(x_{t-s})\right)^2 = \sum_{i=1}^N \phi(x_i)\left(\mathbb{E}^x f(\Delta_i(x_{t-s})) - \mathbb{E}^x f(x_{t-s})\right)^2 \le P_{t-s}\,\Gamma(f,f)(x).$$
The inequality $\Gamma(P_t f, P_t f) \le P_t\,\Gamma(f,f)$ for $t > 0$ relates directly to the $\Gamma_2$ criterion (see [22,23]), which states that if $\Gamma_2(f) := \frac{1}{2}\left(L(\Gamma(f,f)) - 2\,\Gamma(f, Lf)\right) \ge 0$, then the Poincaré inequality holds, since
$$\frac{d}{ds}\left(P_s\,\Gamma\left(P_{t-s}f,\,P_{t-s}f\right)\right) = \frac{1}{2}\,P_s\left(L\,\Gamma\left(P_{t-s}f,\,P_{t-s}f\right) - 2\,\Gamma\left(P_{t-s}f,\,LP_{t-s}f\right)\right) = P_s\left(\Gamma_2\left(P_{t-s}f\right)\right) \ge 0$$
implies $\Gamma(P_t f, P_t f) \le P_t\,\Gamma(f,f)$ (see also [13]).
Unfortunately, this is not the case for our PJMP, where the degeneracy of the jumps and their memoryless nature allow any neuron $x_i$ to jump to zero from any position, with a probability that depends on the current configuration of the neurons. Moreover, contrary to the case of Poisson processes, our intensity also depends on the position.
To recover the carré du champ of the function itself, we make use of Dynkin's formula, which allows us to bound the expectation of a function through the expectation of the infinitesimal generator applied to the function, a quantity comparable to the desired carré du champ.
Therefore, from Dynkin's formula
$$\mathbb{E}^x f(x_t) = f(x) + \int_0^t \mathbb{E}^x\left(Lf(x_u)\right)du,$$
we get
$$\left(\mathbb{E}^{\Delta_i(x)} f(x_{t-s}) - \mathbb{E}^x f(x_{t-s})\right)^2 \le 2\left(f(\Delta_i(x)) - f(x)\right)^2 + 2\left(\int_0^{t-s}\left(\mathbb{E}^{\Delta_i(x)}(Lf(x_u)) - \mathbb{E}^x(Lf(x_u))\right)du\right)^2.$$
To bound the second term above, we use the bound shown in Lemma 3:
$$\left(\mathbb{E}^{\Delta_i(x)} f(x_{t-s}) - \mathbb{E}^x f(x_{t-s})\right)^2 \le 2\left(f(\Delta_i(x)) - f(x)\right)^2 + 32\,t_0^2\,M\,\Gamma(f,f)(\Delta_i(x)) + 2\,c(t-s)\int_0^{t-s}\mathbb{E}^x\,\Gamma(f,f)(x_u)\,du.$$
This together with (19) gives
$$\Gamma\left(P_{t-s}f,\,P_{t-s}f\right)(x) \le 2\,\Gamma(f,f)(x) + 32\,t_0^2\,M\sum_{i=1}^N \phi(x_i)\,\Gamma(f,f)(\Delta_i(x)) + 2M\,c(t-s)\int_0^{t-s}\mathbb{E}^x\,\Gamma(f,f)(x_u)\,du.$$
Finally, plugging this into (18), we obtain
$$P_t f^2(x) - \left(P_t f(x)\right)^2 \le 2\int_0^t P_s\,\Gamma(f,f)(x)\,ds + 32\,t_0^2\,M\int_0^t P_s\left(\sum_{i=1}^N \phi(x_i)\,\Gamma(f,f)(\Delta_i(x))\right)ds + 2M\int_0^t c(t-s)\,P_s\left(\int_0^{t-s}\mathbb{E}^x\,\Gamma(f,f)(x_u)\,du\right)ds.$$
For the second term we can bound $\phi$ by $M$. For the last term on the right-hand side, since the carré du champ is non-negative, we can get
$$P_s\int_0^{t-s}\mathbb{E}^x\,\Gamma(f,f)(x_u)\,du = P_s\int_s^t P_{w-s}\,\Gamma(f,f)(x)\,dw \le P_s\int_0^t P_{w-s}\,\Gamma(f,f)(x)\,dw = \int_0^t P_w\,\Gamma(f,f)(x)\,dw,$$
where we used the Markov semigroup property $P_s P_{w-s} = P_w$. Since this last quantity does not depend on $s$, and $c(t-s) \le c(t)$, we further get
$$\int_0^t c(t-s)\,P_s\left(\int_0^{t-s}\mathbb{E}^x\,\Gamma(f,f)(x_u)\,du\right)ds \le c(t)\,t\int_0^t P_w\,\Gamma(f,f)(x)\,dw.$$
Putting everything together, we finally obtain
$$P_t f^2(x) - \left(P_t f(x)\right)^2 \le \left(2 + 2M\,c(t)\,t\right)\int_0^t P_s\,\Gamma(f,f)(x)\,ds + 32\,t_0^2\,M^2\sum_{i=1}^N \int_0^t P_s\,\Gamma(f,f)(\Delta_i(x))\,ds,$$
and so the theorem follows with constants
$$\alpha(t) = 2 + 2M\,t\,c(t) = 2 + 16\,t_0\,M^2(C_1+1)\,t + 4(1+C_1)\,M^2\,t^2 \qquad \text{and} \qquad \beta = 32\,t_0^2\,M^2. \qquad \square$$
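As a sanity check on the role Dynkin's formula plays in the proof above, the martingale property it expresses can be verified by Monte Carlo on a small network: along each trajectory, $f(X_T) - f(X_0) - \int_0^T Lf(X_u)\,du$ should average to zero. The sketch below is a self-contained illustration with parameters of our own choosing.

```python
import numpy as np

rng = np.random.default_rng(2)
N, m, T = 3, 4.0, 2.0
W = 0.7 * np.ones((N, N)); np.fill_diagonal(W, 0.0)
phi = lambda v: v + 0.5                     # affine intensity, in line with (5)
f = lambda x: np.cos(x).sum()

def Delta(x, i):
    y = x.copy()
    mask = (np.arange(N) != i) & (x + W[i] <= m)
    y[mask] += W[i][mask]
    y[i] = 0.0
    return y

def Lf(x):
    return sum(phi(x[i]) * (f(Delta(x, i)) - f(x)) for i in range(N))

def dynkin_residual(x0):
    """f(X_T) - f(x0) - int_0^T Lf(X_u) du along one trajectory; by Dynkin's
    formula this quantity has mean zero."""
    x, t, integral = x0.copy(), 0.0, 0.0
    while True:
        hold = rng.exponential(1.0 / phi(x).sum())
        integral += Lf(x) * min(hold, T - t)   # the path is constant between jumps
        t += hold
        if t >= T:
            return f(x) - f(x0) - integral
        x = Delta(x, rng.choice(N, p=phi(x) / phi(x).sum()))

res = np.array([dynkin_residual(np.zeros(N)) for _ in range(20000)])
print(res.mean(), 2 * res.std() / np.sqrt(len(res)))  # mean ~ 0 within ~2 s.e.
```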

3. Proof of the Poincaré Inequalities for Starting Configurations in the Domain of the Invariant Measure

We start by showing that when the initial configuration belongs to the domain $\hat{D}$ of the invariant measure, we obtain a strong lower bound for the probabilities $\pi_t(x,y) = P_x(X_t = y)$ for $t$ large enough, as presented in the following lemma.
Lemma 4.
Assume the PJMP as described in (2)–(5). Then there exists $\theta > 0$ such that, for every $x \in \hat{D}$ and $y \in D_x$,
$$\pi_t(x,y) \ge \frac{1}{\theta}$$
for every $t \ge t_1 = \frac{1}{\delta} + t_0$.
Proof. 
Since $\hat{D} \subset D$ is finite, there exists a strictly positive constant $\eta$ such that $\pi(x) > \eta > 0$ for every $x \in \hat{D}$. Since $\lim_{t \to \infty}\pi_t(x,y) = \pi(y)$ for every $x \in \hat{D}$ and $y \in D_x$, and since $\hat{D}$ is finite, we obtain that there exists $\theta > 0$ such that $\pi_t(x,y) > \frac{1}{\theta}$ for every such $x$ and $y$ and every $t \ge t_1$. □
Using the last result, we can obtain the technical bound needed in the proof of the local Poincaré inequality, taking advantage of the bounds available for times larger than $t_1$.
Lemma 5.
Assume the PJMP as described in (2)–(5). Then, for every $z \in \hat{D}$,
$$P_s\left(\int_0^{t-s}\left(\mathbb{E}^{\Delta_i(z)}\left(Lf(z_u)\,\mathbb{1}_{z_u \in \hat{D}}\right) - \mathbb{E}^z\left(Lf(z_u)\,\mathbb{1}_{z_u \in \hat{D}}\right)\right)du\right)^2 \le 4\,\theta^2\,t^2\,M\,\mathbb{E}^x\left(\Gamma(f,f)(x_t)\,\mathbb{1}_{x_t \in \hat{D}}\right)$$
for every $t \ge t_1$.
Proof. 
We can compute
$$I_2 := P_s\left(\int_0^{t-s}\left(\mathbb{E}^{\Delta_i(z)}\left(Lf(z_u)\,\mathbb{1}_{z_u \in \hat{D}}\right) - \mathbb{E}^z\left(Lf(z_u)\,\mathbb{1}_{z_u \in \hat{D}}\right)\right)du\right)^2 \le 2\,P_s\left(\int_0^t \sum_{y \in \hat{D}}\pi_u(\Delta_i(z),y)\left|Lf(y)\right|du\right)^2 + 2\,P_s\left(\int_0^t \sum_{y \in \hat{D}}\pi_u(z,y)\left|Lf(y)\right|du\right)^2.$$
Now we use the Cauchy–Schwarz inequality three times, to pass the square inside the integral and the two sums. We then obtain
$$I_2 \le 2\,\theta^2\,t\,M\sum_{\omega = z,\,\Delta_i(z)}\;\sum_{y \in \hat{D}}\int_0^t P_s\,\pi_u(\omega,y)\sum_{j=1}^N \phi(y_j)\left(f(\Delta_j(y)) - f(y)\right)^2 du = 2\,\theta^2\,t\,M\sum_{\omega = z,\,\Delta_i(z)}\;\sum_{y \in \hat{D}}\int_0^t \pi_{s+u}(\omega,y)\sum_{j=1}^N \phi(y_j)\left(f(\Delta_j(y)) - f(y)\right)^2 du,$$
where above we used the semigroup property $P_s P_u = P_{s+u}$. Since $t \ge t_1$, we can use Lemma 4 to bound, for every $\omega$ and every $y \in \hat{D}$, $\pi_{u+s}(\omega,y) \le \theta\,\pi_t(x,y)$. We then obtain
$$I_2 \le 4\,\theta^2\,t^2\,M\sum_{y \in D}\pi_t(x,y)\sum_{j=1}^N \phi(y_j)\left(f(\Delta_j(y)) - f(y)\right)^2 = 4\,\theta^2\,t^2\,M\,P_t\,\Gamma(f,f)(x).$$
 □

Proof of Theorem 2

We can now show the Poincaré inequality for the semigroup $P_t$ for initial configurations inside the domain $\hat{D}$ of the invariant measure $\pi$.
Proof. 
We work as in the proof of the Poincaré inequality of Theorem 1 for general initial conditions. As before, we denote $P_t f(x) = \mathbb{E}^x f(x_t)$. Then
$$\mathbb{E}^x\left(f^2(x_t)\,\mathbb{1}_{x_t \in \hat{D}}\right) - \left(\mathbb{E}^x\left(f(x_t)\,\mathbb{1}_{x_t \in \hat{D}}\right)\right)^2 = \int_0^t \frac{d}{ds}\,P_s\left(\mathbb{E}^{x_s}\left(f(x_{t-s})\,\mathbb{1}_{x_{t-s} \in \hat{D}}\right)\right)^2(x)\,ds = \int_0^t P_s\,\Gamma\left(\mathbb{E}^{x_s}\left(f(x_{t-s})\,\mathbb{1}_{x_{t-s} \in \hat{D}}\right),\,\mathbb{E}^{x_s}\left(f(x_{t-s})\,\mathbb{1}_{x_{t-s} \in \hat{D}}\right)\right)(x)\,ds,$$
since $\frac{d}{ds}P_s = LP_s = P_sL$. To bound the carré du champ, as in the general case of Theorem 1, from (19) and Dynkin's formula we obtain
$$\Gamma\left(\mathbb{E}^{x_s}\left(f(x_{t-s})\,\mathbb{1}_{x_{t-s} \in \hat{D}}\right),\,\mathbb{E}^{x_s}\left(f(x_{t-s})\,\mathbb{1}_{x_{t-s} \in \hat{D}}\right)\right) \le 2\,\Gamma(f,f)(x_s) + 2\sum_{i=1}^N \phi(x_s^i)\left(\int_0^{t-s}\left(\mathbb{E}^{\Delta_i(x_s)}\left(Lf(x_u)\,\mathbb{1}_{x_u \in \hat{D}}\right) - \mathbb{E}^{x_s}\left(Lf(x_u)\,\mathbb{1}_{x_u \in \hat{D}}\right)\right)du\right)^2.$$
From that, (20) and the bound $M = \sup_{x \in D}\sum_{i=1}^N \phi(x_i)$ on the intensity, we then get
$$\mathbb{E}^x\left(f^2(x_t)\,\mathbb{1}_{x_t \in \hat{D}}\right) - \left(\mathbb{E}^x\left(f(x_t)\,\mathbb{1}_{x_t \in \hat{D}}\right)\right)^2 \le 2\int_0^t P_s\,\Gamma(f,f)(x)\,ds + 2M\sum_{i=1}^N \int_0^t P_s\left(\int_0^{t-s}\left(\mathbb{E}^{\Delta_i(x_s)}\left(Lf(x_u)\,\mathbb{1}_{x_u \in \hat{D}}\right) - \mathbb{E}^{x_s}\left(Lf(x_u)\,\mathbb{1}_{x_u \in \hat{D}}\right)\right)du\right)^2 ds.$$
Since $t \ge t_1$, we can use Lemma 5 to bound the second term on the right-hand side:
$$\mathbb{E}^x\left(f^2(x_t)\,\mathbb{1}_{x_t \in \hat{D}}\right) - \left(\mathbb{E}^x\left(f(x_t)\,\mathbb{1}_{x_t \in \hat{D}}\right)\right)^2 \le 2\int_0^t P_s\,\Gamma(f,f)(x)\,ds + 8\,\theta^2\,M^2\,N\,t^3\,P_t\,\Gamma(f,f)(x).$$
 □

4. Proof of the Poincaré Inequalities for the Invariant Measure

In this section, we prove the Poincaré inequality for the invariant measure $\pi$, presented in Theorem 3, using methods developed in [20,24,25].
Proof. 
At first, assume that $\pi(f) = 0$. We can write
$$\mathrm{Var}_\pi(f) = \int f^2\,d\pi = \int f^2\,\mathbb{1}_{\hat{D}}\,d\pi.$$
We will follow the method from [16] used to prove the spectral gap for finite Markov chains. Since $\pi(f) = 0$, we can write
$$\int f^2\,\mathbb{1}_{\hat{D}}\,d\pi = \frac{1}{2}\iint\left(f(x) - f(y)\right)^2\,\mathbb{1}_{x \in \hat{D}}\,\mathbb{1}_{y \in \hat{D}}\,\pi(dx)\,\pi(dy).$$
Consider $\delta(x,y) = \{i_1, i_2, \dots, i_{|\delta(x,y)|}\}$ the shortest path from $x \in \hat{D}$ to $y \in \hat{D}$, where the index $i_k$ indicates the neuron that spikes at the $k$-th step. Since $\hat{D}$ is finite, $\max\{|\delta(x,y)| : x, y \in \hat{D}\}$ is finite. We denote $\tilde{x}_0 = x$ and $\tilde{x}_k = \Delta_{i_k}(\Delta_{i_{k-1}}(\cdots \Delta_{i_1}(x)))$ for $k = 1, \dots, |\delta(x,y)|$, so that $\tilde{x}_{|\delta(x,y)|} = y$. We can therefore write
$$\pi(x)\,\pi(y)\left(f(x) - f(y)\right)^2 \le \pi(y)\,\pi(x)\,|\delta(x,y)|\sum_{j=0}^{|\delta(x,y)|-1}\left(f(\Delta_{i_{j+1}}(\tilde{x}_j)) - f(\tilde{x}_j)\right)^2 \le \frac{\pi(y)\,\pi(x)\,|\delta(x,y)|}{\delta}\sum_{j=0}^{|\delta(x,y)|-1}\phi\left(\tilde{x}_j^{\,i_{j+1}}\right)\left(f(\Delta_{i_{j+1}}(\tilde{x}_j)) - f(\tilde{x}_j)\right)^2,$$
where above we used the Cauchy–Schwarz inequality and the fact that $\phi \ge \delta$. We can now form the carré du champ:
$$\pi(x)\,\pi(y)\left(f(x) - f(y)\right)^2 \le \frac{\pi(y)\,\pi(x)\,|\delta(x,y)|}{\delta}\sum_{j=0}^{|\delta(x,y)|-1}\sum_{i=1}^N \phi\left(\tilde{x}_j^{\,i}\right)\left(f(\Delta_i(\tilde{x}_j)) - f(\tilde{x}_j)\right)^2 \le \frac{\pi(y)\,\pi(x)\,|\delta(x,y)|}{\min_{x \in \hat{D}}\pi(x)\,\delta}\sum_{j=0}^{|\delta(x,y)|-1}\pi(\tilde{x}_j)\,\Gamma(f,f)(\tilde{x}_j).$$
We then have
$$\int f^2\,\mathbb{1}_{\hat{D}}\,d\pi \le \frac{N^2}{2\,\min_{x \in \hat{D}}\pi(x)\,\delta}\sum_{x \in \hat{D}}\pi(x)\,\Gamma(f,f)(x) = \frac{N^2}{2\,\min_{x \in \hat{D}}\pi(x)\,\delta}\,\pi\left(\Gamma(f,f)\,\mathbb{1}_{\hat{D}}\right).$$
Putting everything together leads to
$$\mathrm{Var}_\pi(f) \le \frac{N^2}{2\,\min_{x \in \hat{D}}\pi(x)\,\delta}\int \Gamma(f,f)\,d\pi.$$
 □
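To make Theorem 3 concrete, everything can be computed explicitly on a toy network. The sketch below is entirely our own illustration (two neurons, all synaptic weights equal to 1, cap $m = 2$, affine intensity): the finite state space makes $\pi$, $\Gamma$ and the ratio $\mathrm{Var}_\pi(f)/\pi(\Gamma(f,f))$ directly computable, and the states that $\pi$ does not charge are exactly the configurations outside $\hat{D}$ discussed in Section 1.3.

```python
import itertools
import numpy as np

# toy two-neuron chain with integer weights W = 1 and cap m = 2, so that
# {0, 1, 2}^2 is finite and closed under the jump maps Delta_i
N, m = 2, 2
phi = lambda v: v + 0.5
states = list(itertools.product(range(m + 1), repeat=N))
index = {s: k for k, s in enumerate(states)}

def Delta(s, i):
    return tuple(0 if j == i else (s[j] + 1 if s[j] + 1 <= m else s[j])
                 for j in range(N))

# generator matrix Q: rate phi(s_i) from state s to Delta_i(s)
Q = np.zeros((len(states), len(states)))
for s in states:
    for i in range(N):
        Q[index[s], index[Delta(s, i)]] += phi(s[i])
        Q[index[s], index[s]] -= phi(s[i])

# invariant measure pi: the left null vector of Q, supported on D-hat
w, v = np.linalg.eig(Q.T)
pi = np.real(v[:, np.argmin(np.abs(w))])
pi = np.abs(pi) / np.abs(pi).sum()

def gamma(f):
    """Carre du champ Gamma(f,f) at every state, for f given as a vector."""
    return 0.5 * np.array([sum(phi(s[i]) * (f[index[Delta(s, i)]] - f[index[s]]) ** 2
                               for i in range(N)) for s in states])

# empirical Poincare ratio over random functions f
rng = np.random.default_rng(3)
ratios = [
    (pi @ (f - pi @ f) ** 2) / (pi @ gamma(f))
    for f in rng.normal(size=(200, len(states)))
]
print(max(ratios))   # empirical lower bound on the best constant C_0 in Theorem 3
```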

5. Discussion

In this paper, we studied the probabilistic model introduced by Galves and Löcherbach in [1] to describe neural networks. In particular, we showed that, despite the degenerate nature of the process, one can still obtain Poincaré-type inequalities for the associated semigroup, similar to the ones obtained in [13,14,17] for point processes without degenerate jumps.
In terms of practical implications, the inequality we derived for the invariant measure $\pi$ implies that the process concentrates around its mean, while the Poincaré-type inequalities for the semigroup imply that, despite the degeneracy characterizing the behavior of the neural system, after a long time the neurons behave close to a system without degeneracy.
In the current paper we studied both the case where the initial configuration belongs to the domain of the invariant measure and the case where it does not. Future work should focus on the special case where the initial configuration belongs exclusively to the domain of the invariant measure. There, inequalities stronger than the Poincaré-type inequalities obtained here for the semigroup appear to be satisfied, such as the modified logarithmic Sobolev inequality.
Furthermore, the extension of the results obtained in the current paper for compact neurons to the more general case of unbounded neurons is of particular interest.

Author Contributions

The authors contributed equally to this work.

Funding

This article was produced as part of the activities of the FAPESP Research, Innovation and Dissemination Center for Neuromathematics (grant 2013/07699-0, São Paulo Research Foundation). It was also supported by FAPESP grants 2016/17655-8 (P.H.) and 2017/15587-8 (I.P.).

Acknowledgments

The authors thank Eva Löcherbach for careful reading and valuable comments.

Conflicts of Interest

The authors declare no conflict of interest.

References

  1. Galves, A.; Löcherbach, E. Infinite systems of interacting chains with memory of variable length—A stochastic model for biological neural nets. J. Stat. Phys. 2013, 151, 896–921.
  2. Chevalier, J. Mean-field limit of generalized Hawkes processes. Stoch. Process. Their Appl. 2017, 127, 3870–3912.
  3. Duarte, A.; Löcherbach, E.; Ost, G. Stability, convergence to equilibrium and simulation of non-linear Hawkes processes with memory kernels given by the sum of Erlang kernels. arXiv 2016, arXiv:1610.03300.
  4. Duarte, A.; Ost, G. A model for neural activity in the absence of external stimuli. Markov Process. Relat. Fields 2014, 22, 37–52.
  5. Hansen, N.; Reynaud-Bouret, P.; Rivoirard, V. Lasso and probabilistic inequalities for multivariate point processes. Bernoulli 2015, 21, 83–143.
  6. Hodara, P.; Löcherbach, E. Hawkes processes with variable length memory and an infinite number of components. Adv. Appl. Probab. 2017, 49, 84–107.
  7. Hodara, P.; Krell, N.; Löcherbach, E. Non-parametric estimation of the spiking rate in systems of interacting neurons. Stat. Inference Stoch. Process. 2016, 21, 81–111.
  8. Davis, M.H.A. Piecewise-deterministic Markov processes: A general class of nondiffusion stochastic models. J. R. Stat. Soc. Ser. B 1984, 46, 353–388.
  9. Davis, M.H.A. Markov Models and Optimization. In Monographs on Statistics and Applied Probability; Chapman & Hall: London, UK, 1993; Volume 49.
  10. Crudu, A.; Debussche, A.; Muller, A.; Radulescu, O. Convergence of stochastic gene networks to hybrid piecewise deterministic processes. Ann. Appl. Probab. 2012, 22, 1822–1859.
  11. Pakdaman, K.; Thieulen, M.; Wainrib, G. Fluid limit theorems for stochastic hybrid systems with application to neuron models. Adv. Appl. Probab. 2010, 42, 761–794.
  12. Azaïs, R.; Bardet, J.B.; Genadot, A.; Krell, N.; Zitt, P.A. Piecewise deterministic Markov processes (PDMPs): Recent results. ESAIM Proc. 2014, 44, 276–290.
  13. Ané, C.; Ledoux, M. On logarithmic Sobolev inequalities for continuous time random walks on graphs. Probab. Theory Relat. Fields 2000, 116, 573–602.
  14. Chafaï, D. Entropies, convexity, and functional inequalities. J. Math. Kyoto Univ. 2004, 44, 325–363.
  15. Diaconis, P.; Saloff-Coste, L. Logarithmic Sobolev inequalities for finite Markov chains. Ann. Appl. Probab. 1996, 6, 695–750.
  16. Saloff-Coste, L. Lectures on finite Markov chains. In École d'Été de Probabilités de Saint-Flour XXVI; Lecture Notes in Mathematics; Springer: Berlin/Heidelberg, Germany, 1996; Volume 1665, pp. 301–413.
  17. Wang, F.-Y.; Yuan, C. Poincaré inequality on the path space of Poisson point processes. J. Theor. Probab. 2010, 23, 824–833.
  18. Guionnet, A.; Zegarlinski, B. Lectures on logarithmic Sobolev inequalities. In Séminaire de Probabilités XXXVI; Lecture Notes in Mathematics; Springer: Berlin/Heidelberg, Germany, 2003; Volume 1801, pp. 1–134.
  19. Bakry, D.; Émery, M. Diffusions hypercontractives. In Séminaire de Probabilités XIX; Lecture Notes in Mathematics; Springer: Berlin/Heidelberg, Germany, 1985; Volume 1123, pp. 177–206.
  20. Bakry, D.; Gentil, I.; Ledoux, M. Analysis and Geometry of Markov Diffusion Operators. In Grundlehren der Mathematischen Wissenschaften; Springer: Berlin/Heidelberg, Germany, 2014; Volume 348.
  21. Diaconis, P.; Stroock, D. Geometric bounds for eigenvalues of Markov chains. Ann. Appl. Probab. 1991, 1, 36–61.
  22. Bakry, D. L'hypercontractivité et son utilisation en théorie des semigroupes. In École d'Été de Probabilités de Saint-Flour; Lecture Notes in Mathematics; 1994; Volume 1581, pp. 1–114.
  23. Bakry, D. On Sobolev and logarithmic Sobolev inequalities for Markov semigroups. In New Trends in Stochastic Analysis; 1997; pp. 43–75.
  24. Bakry, D.; Cattiaux, P.; Guillin, A. Rate of convergence for ergodic continuous Markov processes: Lyapunov versus Poincaré. J. Funct. Anal. 2008, 254, 727–759.
  25. Cattiaux, P.; Guillin, A.; Wang, F.-Y.; Wu, L. Lyapunov conditions for super Poincaré inequality. J. Funct. Anal. 2009, 256, 1821–1841.
