Article

Boolean Valued Representation of Random Sets and Markov Kernels with Application to Large Deviations

by
Antonio Avilés López
1 and
José Miguel Zapata García
2,*
1
Departamento de Matematica, Universidad de Murcia, Espinardo, 30100 Murcia, Spain
2
School of Mathematics and Statistics, University College Dublin, Belfield, 58622 Dublin 4, Ireland
*
Author to whom correspondence should be addressed.
Mathematics 2020, 8(10), 1848; https://doi.org/10.3390/math8101848
Submission received: 31 August 2020 / Revised: 7 October 2020 / Accepted: 13 October 2020 / Published: 20 October 2020
(This article belongs to the Special Issue Boolean Valued Analysis with Applications)

Abstract:
We establish a connection between random set theory and Boolean valued analysis by showing that random Borel sets, random Borel functions, and Markov kernels are respectively represented by Borel sets, Borel functions, and Borel probability measures in a Boolean valued model. This enables a Boolean valued transfer principle to obtain random set analogues of available theorems. As an application, we establish a Boolean valued transfer principle for large deviations theory, which allows for the systematic interpretation of results in large deviations theory as versions for Markov kernels. By means of this method, we prove versions of Varadhan's and Bryc's theorems, and a conditional version of Cramér's theorem.

1. Introduction

A situation which often arises in probability theory is the need to generalize a known unconditional theorem to a setting which is no longer unconditional. Instead, there is an infinity of models depending on some parameter ω ∈ Ω, for a probability space (Ω, F, P), and the theorem has to be applied simultaneously to 'almost all' ω ∈ Ω. The formalization of this approach has motivated new developments in probability theory such as [1,2]. In [3], the connection between the algebra of conditional sets [1,4] and Boolean valued analysis [5] was provided. In the present paper, we provide a similar connection for the framework of random set theory [2]. We aim to show that the well-known set-theoretic techniques of Boolean valued analysis are perfectly suited to the type of applications of random set theory. The advantage is that the present approach allows for applying the full power of the set-theoretical methods. In particular, the so-called transfer principle of Boolean valued models provides a tool for expanding the content of already available theorems to non-obvious analogues in random set theory.
To reach this aim, we study the Boolean valued representation of different objects in random set theory. First, we study the Boolean valued representations of random sets. The notion of a random set gives meaning to random objects X whose realizations X ( ω ) , ω Ω , take values as subsets of some space X ; see [2]. These objects have an important role in mathematical finance and stochastic optimization; see e.g., [6,7,8,9,10]. By considering the Boolean valued model associated with the underlying probability space, we show that a random set corresponds to a Borel set in this model. Moreover, we characterize when a random set corresponds to an open set, a closed set, or a compact set in the model. Similar connections are provided for random Borel functions. Second, we study the Boolean valued representation of Markov kernels in the model. Namely, we prove that a Markov kernel corresponds to a Borel probability measure in the model. Moreover, the Lebesgue integral of a random Borel function with respect to a Markov kernel corresponds to the Lebesgue integral of a real-valued function with respect to the corresponding Borel probability measure in the model. Third, we study the Boolean valued representation of Markov kernels that stem from regular conditional distributions of real-valued random variables. Namely, we show that a random variable corresponds to a random variable in a Boolean valued model so that its regular conditional probability corresponds to its probability distribution in the model. Moreover, this correspondence preserves arithmetic operations, almost sure convergence, and sends conditionally independent random variables to (unconditional) independent random variables in the model.
In probability theory, large deviations theory concerns the asymptotic behavior of sequences of probability distributions; see, e.g., [11]. By means of the previous Boolean valued representations and the Boolean valued transfer principle, we prove versions for sequences of Markov kernels of basic results in large deviations theory. Namely, we obtain versions of Varadhan's and Bryc's theorems and a conditional version of Cramér's theorem for the sequence of sample means of a conditionally independent identically distributed sequence of real-valued random variables. We emphasize that these results are just instances of application of the transfer principle, whose number can be easily increased. We point out that large deviations results for Markov kernels are particularly important in the theory of random walks in random media and are called 'quenched' large deviations principles; see, e.g., [12].
The paper is organized as follows: in Section 2, we recall some basics of Boolean valued analysis. Section 3 is devoted to some preliminaries on random sets. In Section 4, we study the Boolean valued representation of random sets and random functions. In Section 5, we study the Boolean valued representation of Markov kernels. In Section 6, we study the Boolean valued representation of regular conditional probability distributions. Finally, in Section 7, a Boolean valued transfer principle for large deviations of sequences of Markov kernels is provided.

2. Basics of Boolean Valued Analysis

The main tool of Boolean valued analysis is Boolean valued models of set theory. The precise formulation of Boolean valued models requires some familiarity with the basics of set theory and logic, and, in particular, with first-order logic, ordinals and transfinite induction. For the convenience of the reader, we will give some background. All the principles and results in this section are well-known, and details can be found in [5].
Let us consider a universe of sets V satisfying the axioms of the Zermelo–Fraenkel set theory with the axiom of choice (ZFC), and a first-order language L, which allows for the formulation of statements about the elements of V. In the universe V, we have all possible mathematical objects (real numbers, topological spaces, and so on). The language L consists of names for the elements of V together with a finite list of logical symbols (∀, ∧, ¬, and parentheses), variables, and the predicates = and ∈. Though we usually use a much richer language by introducing more and more intricate definitions, in the end, any usual mathematical statement can be written using only those mentioned. The elements of the universe V are classified into a transfinite hierarchy: V_0 ⊆ V_1 ⊆ V_2 ⊆ ⋯ ⊆ V_ω ⊆ V_{ω+1} ⊆ ⋯, where V_0 = ∅, V_{α+1} = P(V_α) is the family of all sets whose elements come from V_α, and V_β = ⋃_{α<β} V_α for limit ordinals β.
In the following, we consider an underlying probability space (Ω, F, P), which is a member of the universe V. Then, the associated measure algebra F := F/P⁻¹(0) is a complete Boolean algebra with 'unity' Ω̄ and 'zero' ∅̄, where Ω̄ := Ω/P⁻¹(0) and ∅̄ := ∅/P⁻¹(0), and Boolean operations
(A/P⁻¹(0)) ∨ (B/P⁻¹(0)) = (A ∪ B)/P⁻¹(0), (A/P⁻¹(0)) ∧ (B/P⁻¹(0)) = (A ∩ B)/P⁻¹(0),
(A/P⁻¹(0))^c := (Ω ∖ A)/P⁻¹(0)
for A, B ∈ F. As usual, we write A ⊆ B whenever A ∧ B = A. Furthermore, F has the countable chain condition, that is, any partition of the unity Ω̄ by elements of F is at most countable. Although we focus on the case of the measure algebra associated with a probability space, the following constructions and principles work for any complete Boolean algebra, by replacing countable partitions with arbitrary partitions.
The Boolean valued universe V^(F) is constructed by transfinite induction over the class Ord of ordinals of the universe V. We start by defining V_0^(F) := ∅. If α + 1 is the successor of the ordinal α, we define V_{α+1}^(F) to be the set of all functions u : D → F with D ⊆ V_α^(F). If α is a limit ordinal, V_α^(F) := ⋃_{β<α} V_β^(F). Finally, let V^(F) := ⋃_{α∈Ord} V_α^(F). Given u in V^(F), we define its rank as the least ordinal α such that u is in V_{α+1}^(F).
We consider a first-order language which allows us to produce statements about V ( F ) . Namely, let L ( F ) be the first-order language which is the extension of L by adding names for each element in V ( F ) . Throughout, we will not distinguish between an element in V ( F ) and its name in L ( F ) . Thus, hereafter, the members of V ( F ) will be referred to as names.
Suppose that φ is a formula in set theory, that is, φ is constructed by applying logical symbols to atomic formulas u = v and u v . If φ does not have any free variable and all the constants in φ are names in V ( F ) , then we define its Boolean truth value, say φ , which is a member of F and is constructed by induction in the length of φ by naturally giving Boolean meaning to the predicates = and ∈, the logical connectives, and the quantifiers. Namely, the Boolean truth value of the atomic formulas u v and u = v for u and v in V ( F ) is defined by transfinite recursion as follows:
⟦u ∈ v⟧ = ⋁_{t ∈ dom(v)} ( v(t) ∧ ⟦t = u⟧ ),
⟦u = v⟧ = ⋀_{t ∈ dom(u)} ( u(t) ⇒ ⟦t ∈ v⟧ ) ∧ ⋀_{t ∈ dom(v)} ( v(t) ⇒ ⟦t ∈ u⟧ ),
where, for A, B ∈ F, we denote A ⇒ B := A^c ∨ B. For non-atomic formulas, we have
⟦(∀x) φ(x)⟧ := ⋀_{u ∈ V^(F)} ⟦φ(u)⟧ and ⟦(∃x) φ(x)⟧ := ⋁_{u ∈ V^(F)} ⟦φ(u)⟧;
⟦φ ∧ ψ⟧ := ⟦φ⟧ ∧ ⟦ψ⟧, ⟦φ ∨ ψ⟧ := ⟦φ⟧ ∨ ⟦ψ⟧, ⟦φ ⇒ ψ⟧ := ⟦φ⟧^c ∨ ⟦ψ⟧ and ⟦¬φ⟧ := ⟦φ⟧^c.
We say that a formula φ is satisfied in the model V ( F ) , whenever it is true with the Boolean truth value, i.e., φ = Ω ¯ . Say that two names u , v are equivalent when u = v = Ω ¯ . It is not difficult to verify that the Boolean truth value of a formula is not affected when we change a name by an equivalent one. However, the relation u = v = Ω ¯ does not mean that the functions u and v coincide. In order to overcome these difficulties, we will consider the separated universe. Namely, let V ¯ ( F ) be the subclass of V ( F ) defined by choosing a representative of the least rank in each class of the equivalence relation { ( u , v ) : u = v = Ω ¯ } .
In the model V^(F), we have all possible mathematical objects (real numbers, topological spaces, and so on), and a full mathematical discussion is possible. For instance, if a name u satisfies ⟦u is a vector space⟧ = Ω̄, we will say that u is a vector space in the model V^(F). If f, u, and v are names such that ⟦f : u → v⟧ = Ω̄, we will say that f is a function from u to v in the model V^(F). Throughout, we will use this terminology for different mathematical objects without further explanations.

2.1. Principles in the Universe  V ( F )

Next, we recall some important principles.
Theorem 1.
(Transfer Principle) If φ is a theorem of ZFC, then φ holds in the model V ( F ) .
The transfer principle tells us that any available theorem holds true in the model V ( F ) .
Theorem 2.
(Maximum Principle) Let φ(x_0, x_1, …, x_n) be a formula with free variables x_0, x_1, …, x_n and suppose that u_1, …, u_n ∈ V^(F). Then, there exists u_0 ∈ V^(F) such that ⟦(∃x) φ(x, u_1, …, u_n)⟧ = ⟦φ(u_0, u_1, …, u_n)⟧.
Theorem 3.
(Mixing Principle) Let (A_k) ⊆ F be a countable partition of Ω̄ and (u_k) a sequence of names. Then, there exists a unique member u of V̄^(F) such that A_k ⊆ ⟦u = u_k⟧ for all k ∈ N.
Given a countable partition (A_k) ⊆ F of Ω̄ and a sequence (u_k) of elements of V^(F), we denote by Σ u_k|A_k the unique name u in V̄^(F) satisfying A_k ⊆ ⟦u = u_k⟧ for all k ∈ N.
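As a toy illustration of the mixing operation, assume for a moment a finite probability space on which, as recalled in Section 2.6 below, names for real numbers can be identified with real-valued random variables; mixing along a partition then amounts to gluing random variables piecewise. The following is a minimal sketch under that simplifying assumption, with all names chosen for illustration only.

import numpy as np

# Toy model: Omega = {0,...,5}, every singleton of positive probability.
# Random variables are arrays indexed by omega; a countable partition of
# the unity is a list of disjoint index sets covering Omega.

def mix(partition, random_variables):
    """Return the mixing sum_k u_k|A_k: on each cell A_k, use u_k."""
    u = np.empty_like(random_variables[0], dtype=float)
    for cell, u_k in zip(partition, random_variables):
        idx = list(cell)
        u[idx] = u_k[idx]
    return u

partition = [{0, 1}, {2, 3, 4}, {5}]          # (A_1, A_2, A_3)
u1 = np.ones(6)                               # the constant 1
u2 = np.arange(6, dtype=float)                # the identity on Omega
u3 = -np.ones(6)                              # the constant -1

print(mix(partition, [u1, u2, u3]))           # [ 1.  1.  2.  3.  4. -1.]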

2.2. Descent Operation

Given a member u of V^(F) with ⟦u ≠ ∅⟧ = Ω̄, we define its descent by
u↓ := { v ∈ V̄^(F) : ⟦v ∈ u⟧ = Ω̄ }.

2.3. Ascent Operation

Consider a nonempty set X of members of V̄^(F). We define the ascent X↑ of X to be the unique representative in V̄^(F) of the name given by the function
X → F, u ↦ Ω̄.
Then, it is satisfied that X↑↓ = { Σ x_k|A_k : (A_k) ⊆ F is a countable partition of Ω̄, (x_k) ⊆ X }.

2.4. Boolean Valued Representation

Definition 1.
A nonempty set X is called a stable F-set if a function Δ : X × X → F exists such that:
 1. Δ(x, y) = Δ(y, x);
 2. Δ(x, y) = Ω̄ iff x = y;
 3. Δ(x, z) ⊇ Δ(x, y) ∧ Δ(y, z);
 4. for every countable partition (A_k) ⊆ F of Ω̄ and sequence (x_k) ⊆ X, there exists a unique x ∈ X such that Δ(x_k, x) ⊇ A_k for all k ∈ N.
Given a sequence (x_k) ⊆ X and a countable partition (A_k) ⊆ F of Ω̄, we denote by Σ x_k|A_k the unique element given by (4) above.
Remark 1.
 1. 
The requirement in (4) above that x is unique is superfluous as the uniqueness can be proven from (1) above; for more details, see, e.g., ([5], Section 3.4.2).
 2. 
A stable F -set can be reformulated as a Boolean metric space by considering the Boolean metric d ( x , y ) : = Δ ( x , y ) c .
 3. 
If u V ¯ ( F ) satisfies that u = Ω ¯ , then, due to the mixing principle, u is a stable F -set for Δ ( x , y ) : = x = y .
Definition 2.
Suppose that X, Y are stable F-sets. A function f : X → Y is said to be stable if f(Σ x_k|A_k) = Σ f(x_k)|A_k for every sequence (x_k) ⊆ X and countable partition (A_k) ⊆ F of Ω̄.
A name u V ¯ ( F ) is said to be a Boolean valued representation of the stable F -set X (or that X is a Boolean valued interpretation of u) if there exists a stable bijection
X u , x x .
Proposition 1.
Every stable F -set X admits a Boolean valued representation X , which is unique up to bijections in the model V ( F ) .
Let X and Y be stable F -sets with Boolean valued representations X and Y , respectively.
  • Suppose that ( X × Y ) F is a name for the Cartesian product of X and Y in the model V ¯ ( F ) . Then, ( X × Y ) : = ( X × Y ) F is a Boolean representation of the stable F -set X × Y . (Notice that X × Y is a stable F -set by setting Δ X × Y ( ( x , y ) , ( x , y ) ) = Δ X ( x , x ) Δ Y ( y , y ) .) More precisely, there exists a bijection ( x , y ) ( x , y ) : X × Y ( X × Y ) F such that
    ( x , y ) = ( x , y ) = Ω ¯ for all ( x , y ) X × Y .
  • A nonempty subset S X is said to be stable if for every countable partition ( A k ) F of Ω ¯ and sequence ( x k ) S it holds that x k A k is again an element of S. Given a stable set S X , we define S : = { x : x S } . Then, S is a nonempty subset of X in the model V ( F ) . Due to the mixing principle, one has that x x is a stable bijection between S and S . In addition, the correspondence S S is a bijection between the class of stable subsets of X and the class of names for nonempty subsets of X in the model V ¯ ( F ) .
  • Suppose that f : X Y is stable. Then, there exists a unique member f of V ¯ ( F ) such that f is a function from X to Y in the model V ( F ) with f ( u ) = f ( u ) = Ω ¯ for all u X .
Remark 2.
The notion of stability plays a key role in related frameworks. In Boolean valued analysis, the terminology cyclic or universally complete A -sets is used (here A is a complete Boolean algebra, for instance, we can take A = F ), see [5]. In conditional set theory, the terminology stable set, stable function, and stable collection are used, see [4]. In fact, the notion of conditional set is a reformulation of that of cyclic A -set; see ([3], Theorem 3.1) and ([3], Remark 3.1). In the theory of L 0 -modules, the notion of stability is called countable concatenation property; see [13], and, in random set theory, is called countable decomposability; see [2].

2.5. Manipulation of Boolean Truth Values

We recall some useful rules to manipulate Boolean truth values. Suppose that X is a stable F -set. Given a nonempty subset Y of X, we define the stable hull of Y to be
s(Y) := { Σ y_k|A_k : (A_k) ⊆ F is a countable partition of Ω̄, (y_k) ⊆ Y }.
Then, s ( Y ) is the smallest stable subset of X that contains Y. In particular, we can consider the bijection y y of s ( Y ) into its Boolean valued representation s ( Y ) .
Proposition 2.
Let φ(x_0, x_1, …, x_n) be a formula with free variables x_0, x_1, …, x_n, and X a stable F-set. Then, given a nonempty subset Y of X and u_1, …, u_n ∈ V^(F),
⟦(∀x ∈ s(Y)↑) φ(x, u_1, …, u_n)⟧ = ⋀_{y∈Y} ⟦φ(y↑, u_1, …, u_n)⟧,
⟦(∃x ∈ s(Y)↑) φ(x, u_1, …, u_n)⟧ = ⋁_{y∈Y} ⟦φ(y↑, u_1, …, u_n)⟧
is satisfied. Moreover, it holds that
  • 1. ⟦(∀x ∈ s(Y)↑) φ(x, u_1, …, u_n)⟧ = Ω̄ if and only if ⟦φ(y↑, u_1, …, u_n)⟧ = Ω̄ for all y ∈ Y;
  • 2. ⟦(∃x ∈ s(Y)↑) φ(x, u_1, …, u_n)⟧ = Ω̄ if and only if ⟦φ(y↑, u_1, …, u_n)⟧ = Ω̄ for some y ∈ s(Y).

2.6. Boolean Valued Numbers

Denote by L^0(F;N), L^0(F;Q), L^0(F;R) and L^0(F;R̄) the spaces of equivalence classes of P-almost surely equal F-measurable random variables with values in the natural numbers, rational numbers, real numbers, and extended real numbers, respectively. Given η, ξ ∈ L^0(F;R̄), the inequalities η ≤ ξ and η < ξ are understood in the almost sure sense. It is well known that any nonempty subset S of L^0(F;R) has a (unique) supremum in L^0(F;R̄) for the partial order ≤, which we denote by ess.sup S; see, e.g., ([14], Section A.5). In particular, L^0(F;R) is a Dedekind complete ring lattice for the partial order ≤. Similarly, we denote by ess.inf S the infimum of S. For η, ξ ∈ L^0(F;R̄), we set
  • { η = ξ } : = { ω : η 0 ( ω ) = ξ 0 ( ω ) } / P 1 ( 0 ) ,
  • { η ξ } : = { ω : η 0 ( ω ) ξ 0 ( ω ) } / P 1 ( 0 ) ,
where η 0 , ξ 0 are representatives arbitrarily chosen of η , ξ , respectively.
By applying the maximum and transfer principles, there exist names N F , Q F , R F , and R ¯ F for the sets of natural numbers, rational numbers, real numbers, and extended real numbers, respectively, in the model V ( F ) . Takeuti [15] proved that R F is a Boolean valued representation of L 0 ( F ; R ) . This fact amounts to the following; see [3,16]. There exists a bijection
ι F : L 0 ( F ; R ¯ ) R ¯ F , ξ ξ
such that
(R1)
(Σ 1_{A_k} ξ_k)↑ = Σ ξ_k↑|A_k for every sequence (ξ_k) ⊆ L^0(F;R̄) and countable partition (A_k) ⊆ F of Ω̄. (Here, by convention, we set 0 · (±∞) := 0.)
(R2)
0 = 0 = Ω ¯ , n = n = Ω ¯ for every n N .
(R3)
ι F bijects L 0 ( F ; N ) into N F .
(R4)
ι F bijects L 0 ( F ; Q ) into Q F .
(R5)
ι F bijects L 0 ( F ; R ) into R F .
(R6)
ξ + η = ς = { ξ + η = ς } , and ξ η = ς = { ξ η = ς } for every ξ , η , ς L 0 ( F ; R ) .
(R7)
ξ = η = { ξ = η } , and ξ η = { ξ η } for every ξ , η L 0 ( F ; R ¯ ) .
(R8)
If S ⊆ L^0(F;R) is stable, then
⟦(ess.sup S)↑ = sup S↑⟧ = Ω̄ and ⟦(ess.inf S)↑ = inf S↑⟧ = Ω̄.
Remark 3.
 1. 
Since N and Q are countable, we have that L 0 ( F ; N ) = s ( N ) and L 0 ( F ; Q ) = s ( Q ) . Then, in view of Proposition 2, we can reduce all essentially countable quantifiers in the model V ( F ) , like n N , q Q ..., to check constant names for n N , q Q ,... This type of manipulation of Boolean truth values will be done throughout without further explanations.
 2. 
Consider a member u of V^(F) with ⟦u ≠ ∅⟧ = Ω̄. Suppose that (v_n) is a sequence of elements of u↓. For any n ∈ L^0(F;N), define v_n := Σ_k v_k|{n = k}. The function
v_· : L^0(F;N) → u↓, n ↦ v_n
is well-defined due to (R3) and stable due to (R1). Then, we can consider v_·↑, which is a name for a sequence in the model V^(F) such that ⟦v_·↑(n) = v_n⟧ = Ω̄ for all n ∈ N. Conversely, suppose that w is a sequence of elements of u in the model V^(F), i.e., ⟦w : N_F → u⟧ = Ω̄. Then, we can consider a sequence (u_n) ⊆ u↓ with ⟦u_n = w(n)⟧ = Ω̄ for each n ∈ N. (A finite-space illustration of the evaluation n ↦ v_n is sketched after this remark.)
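For instance, on a finite probability space, evaluating a sequence (v_k) at a random index n ∈ L^0(F;N) is the pointwise operation v_n(ω) = v_{n(ω)}(ω); the following minimal sketch, assuming such a finite toy model, makes this explicit.

import numpy as np

# Toy model: Omega = {0,1,2,3}.  A sequence (v_1, v_2, ...) of random
# variables is evaluated at a random index n: v_n = sum_k v_k | {n = k}.
v = [k * np.ones(4) for k in range(1, 6)]      # v_k is the constant k, k = 1,...,5
n = np.array([1, 3, 2, 5])                     # a random index n with values in N

v_at_n = np.array([v[n[omega] - 1][omega] for omega in range(4)])
print(v_at_n)                                  # [1. 3. 2. 5.]: agrees with v_k on {n = k}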
Remark 4.
A fundamental result in Boolean valued analysis is the so-called Gordon theorem, which states that the field of real numbers of a Boolean valued universe (associated with an arbitrary complete Boolean algebra) is the Boolean valued representation of a universally complete vector lattice [17]. This result was complemented by Kusraev and Kutateladze by establishing that any universally complete vector lattice is the interpretation of the field of real numbers in a suitable Boolean valued universe; see, e.g., ([5], Section 5.2). In the particular case of the model V^(F), the universally complete vector lattice L^0(F;R) is the interpretation of the field of real numbers R_F in the model V^(F), which was fruitfully exploited by Takeuti [15]. For more details about Gordon's theorem, we refer to [18].

3. Preliminaries on Random Sets

We next recall some basics of random sets, for a detailed account, we refer to [2]. Hereafter, ( Ω , F , P ) is a complete probability space. (If ( Ω , F , P ) is not complete, we can always consider the completion F ˜ : = F P 1 ( 0 ) and the corresponding extension P ˜ of P on F ˜ . Notice that F and F ˜ produce the same measure algebra F and, consequently, the same model V ( F ) .) Throughout, X is an infinite Polish space (i.e., a separable completely metrizable topological space). We denote by L 0 ( F ; X ) the space of classes of equivalence of F -measurable random variables with values in X , and by B ( X ) the Borel σ -algebra of X . For a sequence ( ξ n ) L 0 ( F ; X ) , we write lim n ξ n = ξ whenever lim n ξ n ( ω ) = ξ ( ω ) for a.e. ω Ω .
We consider the product σ -algebra F B ( X ) . Under the present assumptions, projections onto Ω are measurable and measurable selectors exist. Namely, the following proposition holds true; see, e.g., ([19], Theorem 5.4.1).
Proposition 3.
for every M F B ( X ) the following is satisfied:
 (A) 
the projection π Ω ( M ) onto Ω is an element of F ;
 (B) 
there exists ξ L 0 ( F ; X ) such that ξ ( ω ) M ω for a.e. ω π Ω ( M ) , where M ω = { x : ( ω , x ) M } denotes the ω-section of M.
A set-valued mapping X : Ω X is said to be an F -measurable random Borel set (shortly, random Borel set) if its graph
Gph(X) := { (ω, x) ∈ Ω × X : x ∈ X(ω) }
is an element of F ⊗ B(X). Throughout, we identify two random Borel sets X, Y whenever X(ω) = Y(ω) for a.e. ω ∈ Ω. We denote by B_F(X) the set of all (equivalence classes of) random Borel sets. Given X ∈ B_F(X), we say that ξ ∈ L^0(F;X) is an a.s. F-measurable selector of X if ξ(ω) ∈ X(ω) for a.e. ω ∈ Ω. We denote by L^0(F;X) the set of all a.s. F-measurable selectors of X. Due to (B) in Proposition 3, L^0(F;X) is nonempty whenever X is a.s. nonempty. Regarding a set E ∈ B(X) as a constant set-valued mapping, we denote by L^0(F;E) the set of equivalence classes of F-measurable random variables with values in E. Suppose that X ∈ B_F(X):
  • Say that X is a random closed set if X ( ω ) is closed for a.e. ω Ω ;
  • say that X is a random open set if X ( ω ) is open for a.e. ω Ω ;
  • say that X is a random compact set if X ( ω ) is compact for a.e. ω Ω .
Suppose that ξ L 0 ( F ; X ) and X , Y B F ( X ) and let ξ 0 , X 0 , Y 0 be representatives arbitrarily chosen of ξ , X , Y , respectively. We write
  • { ξ X } : = { ω Ω : ξ 0 ( ω ) X 0 ( ω ) } / P 1 ( 0 ) ;
  • { X = Y } : = { ω Ω : X 0 ( ω ) = Y 0 ( ω ) } / P 1 ( 0 ) ;
  • { X Y } : = { ω Ω : X 0 ( ω ) Y 0 ( ω ) } / P 1 ( 0 ) .
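To fix ideas, on a finite base space a random Borel set can be stored as an ω-indexed family of Borel subsets of X. The sketch below is a simplified illustration, assuming X = R and interval-valued sets, of an a.s. measurable selector and of the event {η ∈ X} just introduced.

import numpy as np

# Toy model: Omega = {0,1,2,3}; a random closed set X(omega) = [a(omega), b(omega)] in R.
a = np.array([0.0, -1.0, 2.0, 5.0])
b = np.array([1.0,  1.0, 4.0, 6.0])

xi = (a + b) / 2.0                        # an a.s. measurable selector: xi(omega) in X(omega)

eta = np.array([0.5, 3.0, 3.0, 5.5])      # another random variable
event = (a <= eta) & (eta <= b)           # the event {eta in X}, as a subset of Omega
print(xi, event)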
We say that a function F : Ω × X R is:
  • A random Borel function if F is measurable, where Ω × X and R are endowed with the σ -algebras F B ( X ) and B ( R ) , respectively;
  • essentially bounded if there exists η L 0 ( F ; R ) such that, for every ξ L 0 ( F ; X ) ,
    | F ( ω , ξ ( ω ) ) | η ( ω ) for a.e.   ω Ω .
    holds.
Two random Borel functions F , G are identified whenever F ( ω , · ) = G ( ω , · ) for a.e. ω Ω . We denote by B F R ( X ) the set of (equivalence classes of) random Borel functions, and by B b , F R ( X ) the set of (equivalence classes of) functions F B F R ( X ) which are essentially bounded.
Given F , G B F R ( X ) , we write F G whenever F ( ω , · ) G ( ω , · ) for a.e. ω Ω . Then, the binary relation ≤ is a partial order on B F R ( X ) .
Given a sequence (F_n) in B_F^R(X), we write lim_n F_n = F for F ∈ B_F^R(X) if, for a.e. ω ∈ Ω, it is satisfied that lim_n F_n(ω, x) = F(ω, x) for all x ∈ X.
We say that F B F R ( X ) is random continuous if, for every sequence ( ξ n ) L 0 ( F ; X ) , we have
lim n F ( ω , ξ n ( ω ) ) = F ( ω , ξ ( ω ) ) for a.e.   ω Ω
whenever
lim n ξ n ( ω ) = ξ ( ω ) for a.e.   ω Ω .
Denote by C F R ( X ) (resp. C b , F R ( X ) ) the set of all random continuous functions F in B F R ( X ) (resp. B b , F R ( X ) ).

4. Boolean Valued Representation of Random Sets and Random Functions

For every sequence (ξ_k) ⊆ L^0(F;X) and countable partition (A_k) ⊆ F of Ω̄, denote by Σ 1_{A_k} ξ_k the unique member ξ ∈ L^0(F;X) such that ξ(ω) = ξ_k(ω) for a.e. ω ∈ A_k, for all k ∈ N. Then, the space L^0(F;X) is a stable F-set by defining Δ(ξ, η) := {ξ = η}. Therefore, due to Proposition 1, L^0(F;X) admits a Boolean valued representation, say L^0(F;X)↑. More specifically, there exists a bijective mapping
L^0(F;X) → L^0(F;X)↑↓, ξ ↦ ξ↑
such that (Σ 1_{A_k} ξ_k)↑ = Σ ξ_k↑|A_k for every countable partition (A_k) ⊆ F of Ω̄ and sequence (ξ_k) ⊆ L^0(F;X).

4.1. Boolean Valued Representation of Random Borel Sets

Takeuti [15] showed that the elements of the product σ -algebra F B ( R ) correspond to real Borel sets in the model V ( F ) . Next, we extend this result to an arbitrary Polish space X by showing that each random Borel set X B F ( X ) corresponds to a Borel subset of L 0 ( F ; X ) in the model V ( F ) . Furthermore, we characterize open, closed, and compact sets of L 0 ( F ; X ) in the model V ( F ) .
Given X B F ( X ) , we denote by X the unique element of V ¯ ( F ) equivalent to the name given by the function
L 0 ( F ; X ) F , ξ { ξ X } .
If X is a.s. nonempty, then L 0 ( F ; X ) is a stable set. In that case, it is not difficult to show that L 0 ( F ; X ) = X = Ω ¯ . A manipulation of Boolean truth values shows the following.
Proposition 4.
Suppose that X, Y, Z ∈ B_F(X); then:
  • 1. ⟦ξ↑ ∈ X↑⟧ = {ξ ∈ X} for all ξ ∈ L^0(F;X);
  • 2. ⟦X↑ ⊆ Y↑⟧ = {X ⊆ Y};
  • 3. ⟦X↑ ∩ Y↑ = Z↑⟧ = {X ∩ Y = Z};
  • 4. ⟦X↑ ∪ Y↑ = Z↑⟧ = {X ∪ Y = Z};
  • 5. ⟦(X↑)^c = Z↑⟧ = {X^c = Z}.
Due to (2) in Remark 3, every sequence ( X n ) B F ( X ) corresponds to a sequence X · of subsets of L 0 ( F ; X ) in the model V ( F ) . A manipulation of Boolean truth values bearing in mind (1) in Remark 3 yields the following.
Proposition 5.
Let (X_n) ⊆ B_F(X) be a sequence and Y ∈ B_F(X). Then,
  • 1. ⟦⋃_n X_n↑ = Y↑⟧ = {⋃_n X_n = Y};
  • 2. ⟦⋂_n X_n↑ = Y↑⟧ = {⋂_n X_n = Y}.
Let d : X × X R be any metric compatible with the topology of X . Consider the random metric
d ˜ : L 0 ( F ; X ) × L 0 ( F ; X ) L 0 ( F ; R ) ,
given by d ˜ ( ξ , η ) ( ω ) : = d ( ξ ( ω ) , η ( ω ) ) for a.e. ω Ω . Then, d ˜ is a stable function and therefore we can consider a name d ˜ with d ˜ : L 0 ( F ; X ) × L 0 ( F ; X ) R = Ω ¯ . Furthermore, a manipulation of Boolean truth values shows that, in the model V ( F ) , d ˜ is a metric on L 0 ( F ; X ) .
Suppose that ( ξ n ) S is a sequence in a stable subset S. Due to (2) in Remark 3, ξ · is a name for a sequence in S in the model V ( F ) . As a consequence of Takeuti ([15], Proposition 2.2.1), we have the following.
Lemma 1.
Let ( ξ n ) be a sequence in L 0 ( F ; X ) . Then, lim n d ˜ ( ξ n , ξ ) = 0 iff lim n d ˜ ( ξ n , ξ ) = 0 = Ω ¯ .
Lemma 2.
Let X , Y B F ( X ) be a.s. nonempty. If L 0 ( F ; X ) = L 0 ( F ; Y ) , then X ( ω ) = Y ( ω ) for a.e. ω Ω .
Proof. 
Let X 0 , Y 0 be representatives of X, Y, respectively. Define
M := { (ω, x) ∈ Ω × X : x ∈ X^0(ω), x ∉ Y^0(ω) }.
Due to (A) in Proposition 3, the projection π_Ω(M) is an element of F. Then, P(π_Ω(M)) = 0. Otherwise, due to (B) in Proposition 3, we can find ξ ∈ L^0(F;X) such that ξ ∉ L^0(F;Y), which contradicts our assumption. Thus, X(ω) ⊆ Y(ω) for a.e. ω ∈ Ω; the reverse inclusion follows by interchanging the roles of X and Y. □
Definition 3.
We say that a subset S L 0 ( F ; X ) is sequentially closed if for every sequence ( ξ n ) S such that lim n d ˜ ( ξ n , ξ ) = 0 it holds that ξ S .
In the case X = R^d (d ∈ N), Kabanov and Safarian ([19], Proposition 5.4.3) proved that a stable set is sequentially closed if and only if it is the set of measurable selectors of a random closed set. This was generalized in ([20], Theorem 5.1) to an arbitrary Polish space X; see also ([16], Theorem 5.4.1). We complement this result by providing the corresponding Boolean valued representation.
Proposition 6.
Suppose that S L 0 ( F ; X ) is stable. The following conditions are equivalent:
 1. 
There exists a random closed set X such that S = L 0 ( F ; X ) ;
 2. 
S is sequentially closed;
 3. 
⟦S↑ is a closed subset of L^0(F;X)↑⟧ = Ω̄.
In that case, if L 0 ( F ; X ) = L 0 ( F ; Y ) , then X ( ω ) = Y ( ω ) for a.e. ω Ω .
Proof .
( 1 ) ( 2 ) is precisely ([20], Theorem 5.1). ( 2 ) ( 3 ) follows by a manipulation of Boolean truth values taking into account Lemma 1 and Remark 3. □
Definition 4.
We say that S L 0 ( F ; X ) is open if for every ξ S there exists ε L 0 ( F ; ( 0 , + ) ) such that { η L 0 ( F ; X ) : d ˜ ( ξ , η ) ε } S .
Proposition 7.
Suppose that S L 0 ( F ; X ) is stable. The following conditions are equivalent:
 1. 
There exists a random open set X such that S = L 0 ( F ; X ) ;
 2. 
S is open;
 3. 
⟦S↑ is an open subset of L^0(F;X)↑⟧ = Ω̄.
In that case, if L 0 ( F ; X ) = L 0 ( F ; Y ) , then X ( ω ) = Y ( ω ) for a.e. ω Ω .
Proof .
( 2 ) ( 3 ) is verified by a manipulation of Boolean truth values.
( 1 ) ( 3 ) : Let M : = Gph ( X c ) . Due to (A) in Proposition 3, Ω 0 : = π Ω ( M ) is an element of F . Consider the random Borel set Y where Y ( ω ) : = X c ( ω ) for a.e. ω Ω 0 and Y ( ω ) : = X for a.e. ω Ω 0 c . Then, Y is a random closed set and, consequently, L 0 ( F ; Y ) is a closed set in the model V ( F ) due to Proposition 6. On the one hand, we have
L 0 ( F ; X ) c = L 0 ( F ; Y c ) = Ω ¯ 0 .
On the other hand, ⟦(L^0(F;X)↑)^c = ∅⟧ = Ω̄_0^c. Thus, ⟦(L^0(F;X)↑)^c is closed⟧ = Ω̄; hence, L^0(F;X)↑ is an open set in the model V^(F).
( 3 ) ( 1 ) : If S is open   = Ω ¯ , then S c is closed   = Ω ¯ . Define
C : = { ξ L 0 ( F ; X ) : ξ S c = A }
where A := ⟦(S↑)^c ≠ ∅⟧. Then, C is a stable set such that ⟦C↑ is closed⟧ = Ω̄. Therefore, there exists a random closed set Y such that C = L^0(F;Y) due to Proposition 6. Let X ∈ B_F(X) be such that X(ω) := Y(ω)^c for a.e. ω ∈ A and X(ω) := X for a.e. ω ∈ A^c. Then, X is a random open set such that S = L^0(F;X). □
Definition 5.
We say that S L 0 ( F ; X ) is stably compact if S is stable and, for every sequence ( ξ n ) S , there exist ξ S and a sequence ( n k ) L 0 ( F ; N ) with n 1 < n 2 < such that lim k d ˜ ( ξ n k , ξ ) = 0 . Here, ξ n = k N ξ k { n = k } for n L 0 ( F ; N ) .
The notion of stable compactness is standard in Boolean valued analysis; usually, the terminology cyclical compactness is employed. It is well known that stably compact sets are represented by compact sets in the Boolean valued model; see, e.g., [21,22], which amounts to the equivalence (1)⇔(2) below in the present context. In conditional set theory, the terminology conditional compactness is used; see [3,4,20]. In particular, it was proven in ([23], Theorem 5.12) and ([16], Theorem 5.4.2) that, in the case that X = R^d (d ∈ N), a set is stably compact if and only if it is the set of measurable selectors of a random compact set. All these known results amount to the following.
Proposition 8.
Suppose that S L 0 ( F ; X ) is stable. The following conditions are equivalent:
  • S is stably compact;
  • ⟦S↑ is a compact subset of L^0(F;X)↑⟧ = Ω̄.
If X = R d with d N , the conditions (1) and (2) are equivalent to
3.
There exists a random compact set X such that S = L 0 ( F ; X ) .
In that case, if L 0 ( F ; X ) = L 0 ( F ; Y ) , then X ( ω ) = Y ( ω ) for a.e. ω Ω .
The following result was obtained by Takeuti [15] in the case X = R . For the general case, we need to rely on Proposition 7.
Proposition 9.
u V ( F ) is a Borel subset of L 0 ( F ; X ) in the model V ( F ) if and only if there exists X B F ( X ) such that X = u = Ω ¯ .
Proof. 
Consider the collection
H : = { u V ¯ ( F ) : X B F ( X ) s . t . u = X = Ω ¯ } .
Notice that H V ¯ ( F ) is stable. Moreover, H is a collection of subsets of L 0 ( F ; X ) in the model V ¯ ( F ) . Then, due to Propositions 4 and 5, H is a σ -algebra in the model V ¯ ( F ) . Furthermore, due to Proposition 7, H contains all the open subsets of L 0 ( F ; X ) in the model V ¯ ( F ) . We conclude that, for every u which is a Borel set in the model V ¯ ( F ) , there exists X B F ( X ) such that u = X = Ω ¯ .
For the converse, consider the collection
H : = { M F B ( X ) : M · is a Borel set   = Ω ¯ }
where M · denotes the random Borel set ω M ω . If M = A × O with A F and O X open, we have that M · = L 0 ( F ; O ) A ¯ + A ¯ c , which is an open set in the model V ( F ) due to Proposition 7. Due to Propositions 4 and 5, H is a σ -algebra. It follows that H = F B ( X ) . The proof is complete. □
By noting that, if X B F ( X ) is a.s. nonempty, then L 0 ( F ; X ) = X = Ω ¯ , we can rewrite Proposition 9 in terms of sets of a.s. measurable selectors.
Corollary 1.
Suppose that S L 0 ( F ; X ) is stable. The following conditions are equivalent:
  • 1.There exists a random Borel set X such that S = L 0 ( F ; X ) ;
  • 2. ⟦S↑ is a Borel subset of L^0(F;X)↑⟧ = Ω̄.
In that case, if X, Y are random Borel sets such that S = L^0(F;X) = L^0(F;Y), then X(ω) = Y(ω) for a.e. ω ∈ Ω.

4.2. Boolean Valued Representation of Random Borel Functions

In the following, we connect random Borel functions and random continuous functions with Borel functions and continuous functions, respectively, in the model V^(F). Takeuti [15] established a similar connection between the so-called pseudo-Baire functions and (R to R) Baire functions in the model V^(F). Since the class of Baire functions equals the class of Borel functions in the R-to-R case, the connections provided below extend the results in [15] to the case of a general Polish space X instead of R.
For F B F R ( X ) , we define F ˜ : L 0 ( F ; X ) L 0 ( F ; R ) , where for each ξ L 0 ( F ; X ) we set F ˜ ( ξ ) ( ω ) : = F ( ω , ξ ( ω ) ) for a.e. ω Ω . Since F ˜ : L 0 ( F ; X ) L 0 ( F ; R ) is stable, we can define F ˜ , which is a function from L 0 ( F ; X ) to R in the model V ( F ) .
Lemma 3.
Let F , G B F R ( X ) . If F ˜ ( ξ ) G ˜ ( ξ ) for all ξ L 0 ( F ; X ) , then F G . In particular, if F ˜ = G ˜ , then F = G .
Proof. 
Fix F 0 , G 0 representatives of F , G , respectively. Define
M : = ( ω , x ) Ω × X : F 0 ( ω , x ) > G 0 ( ω , x ) .
Due to (A) in Proposition 3, the projection π_Ω(M) is an element of F. Then, P(π_Ω(M)) = 0. Otherwise, due to (B) in Proposition 3, we can find ξ ∈ L^0(F;X) such that F̃(ξ)(ω) > G̃(ξ)(ω) for a.e. ω ∈ π_Ω(M), which contradicts our assumption. □
Proposition 10.
Suppose that F, G ∈ B_F^R(X); then the following conditions are equivalent:
 1. 
F G ;
 2. 
F ˜ ( ξ ) G ˜ ( ξ ) for every ξ L 0 ( F ; X ) ;
 3. 
( x ) F ˜ ( x ) G ˜ ( x ) = Ω ¯ .
Proof .
( 2 ) ( 3 ) follows by manipulation of Boolean truth values.
( 1 ) ( 2 ) is clear, and ( 2 ) ( 1 ) is Lemma 3. □
Due to Remark 3, any sequence ( F n ) B F R ( X ) corresponds to a sequence F · of functions in the model V ( F ) .
Proposition 11.
Suppose that F, F_n ∈ B_F^R(X) for every n ∈ N. The following conditions are equivalent:
 1. 
lim n F n = F ;
 2. 
lim n F ˜ n ( ξ ) = F ˜ ( ξ ) for every ξ L 0 ( F ; X ) ;
 3. 
( x ) lim n F ˜ n ( x ) = F ˜ ( x ) = Ω ¯ .
Proof .
( 2 ) ( 3 ) follows by manipulation of Boolean truth values.
( 1 ) ( 2 ) is clear.
(2) ⇒ (1): For each n ∈ N, fix a representative F_n^0 of F_n, and fix a representative F^0 of F. Define
M := { (ω, x) ∈ Ω × X : lim sup_n F_n^0(ω, x) ≠ F^0(ω, x) }.
Due to (A) in Proposition 3, the projection π_Ω(M) is an element of F. Then, P(π_Ω(M)) = 0. Otherwise, due to (B) in Proposition 3, we can find ξ ∈ L^0(F;X) such that lim sup_n F̃_n(ξ)(ω) ≠ F̃(ξ)(ω) for a.e. ω ∈ π_Ω(M), which contradicts our assumption. The argument for the limit inferior is similar. □
The following lemma was proven in ([15], Proposition 2.4.1) in the case X = R . The general case follows by the same argument.
Lemma 4.
Suppose that ( F n ) B F R ( X ) . If ( x ) lim n F n ( x ) = u ( x ) = Ω ¯ , then there exists F B F R ( X ) such that lim n F n = F and F ˜ = u = Ω ¯ .
Proposition 12.
Let f : L 0 ( F ; X ) L 0 ( F ; R ) be a stable function. The following conditions are equivalent:
  • 1.There exists F B F R ( X ) such that f = F ˜ ;
  • 2. f is Borel measurable   = Ω ¯ .
In that case, if f = F ˜ = G ˜ for F , G B F R ( X ) , then F = G .
Proof. 
(1) ⇐ (2): Define
H : = F ˜ : F B F R ( X ) .
Notice that H V ¯ ( F ) is stable. Then, H is a collection of functions from L 0 ( F ; X ) to R in the model V ( F ) . We prove that H contains all the Borel measurable functions from L 0 ( F ; X ) to R in the model V ( F ) . Supposing that u V ¯ ( F ) is a characteristic function on a Borel subset of L 0 ( F ; X ) in the model V ( F ) , then u = 1 X = Ω ¯ for some X B F ( X ) , due to Proposition 9. In that case, u = F ˜ = Ω ¯ with F = 1 Gph ( X ) . In the model V ( F ) , H is closed under linear combinations and, due to Lemma 4, is also closed under limits. Therefore, H contains all the Borel measurable functions in the model V ( F ) .
(2) ⇐ (1): Define
H : = { F B F R ( X ) : F ˜ is Borel measurable   = Ω ¯ } .
We prove that B F R ( X ) H . If F = 1 M for some M F B ( X ) , then F ˜ = 1 ( M · ) = Ω ¯ , where M · : ω M ω . The collection H is closed under linear combinations. In addition, if F = lim n F n with ( F n ) H , then for any ξ L 0 ( F ; X )
lim n F n ˜ ( ξ ) = lim n F ˜ n ( ξ ) = ( F ˜ ( ξ ) ) = F ˜ ( ξ )
in the model V^(F). Then, F̃↑ is a limit of Borel functions, hence it is Borel measurable in the model V^(F). □
The following result is proven word for word as in the case X = R; see ([15], Theorem 2.3.2).
Proposition 13.
Let f : L 0 ( F ; X ) L 0 ( F ; R ) be a stable function. The following conditions are equivalent:
 1. 
There exists F C F R ( X ) such that f = F ˜ ;
 2. 
f is continuous   = Ω ¯ .
In that case, if f = F̃ = G̃ for F, G ∈ C_F^R(X), then F = G.

5. Boolean Valued Representation of Markov Kernels

Next, we recall the notion of Markov kernel, which is a fundamental object in probability theory.
Definition 6.
Let κ : Ω × B ( X ) [ 0 , 1 ] be a function.
  • κ is called an essential Markov kernel if:
    • κ ( · , E ) : Ω [ 0 , 1 ] is F -measurable for all E B ( X ) ;
    • κ(ω, X) = 1 and κ(ω, ∅) = 0 for a.e. ω ∈ Ω;
    • If (E_n) ⊆ B(X) and E_i ∩ E_j = ∅ for i ≠ j, then κ(ω, ⋃_n E_n) = Σ_n κ(ω, E_n) for a.e. ω ∈ Ω.
  • An essential Markov kernel κ is called a Markov kernel if κ ( ω , · ) : B ( X ) [ 0 , 1 ] is a probability measure for all ω Ω .
Lemma 5.
([24], Theorem 1) If μ is an essential Markov kernel, then there exists a Markov kernel κ such that μ(ω, E) = κ(ω, E) for a.e. ω ∈ Ω, for all E ∈ B(X).
Lemma 6.
Let κ : Ω × B ( X ) [ 0 , 1 ] be a Markov kernel and X B F ( X ) . If X 0 is a representative of X, then ω κ ( ω , X 0 ( ω ) ) is measurable.
Proof. 
Define
H := { M ∈ F ⊗ B(X) : κ(·, M_·) is measurable }.
Above, M · denotes ω M ω . First, if M = A × E with A F and E B ( X ) , we have that κ ( · , M · ) = 1 A κ ( · , E ) , which is measurable. Hence, F × B ( X ) H . In particular, we have Ω × X H . We prove that H is a Dynkin system, i.e., H is closed under proper differences and under the unions of increasing sequences of sets. If L , M H with L M , we have that κ ( · , ( M L ) · ) = κ ( · , M · L · ) = κ ( · , M · ) κ ( · , L · ) is measurable. If ( M n ) is an increasing sequence of elements of H , then it follows that κ ( · , ( n M n ) · ) = lim n κ ( · , ( M n ) · ) is measurable. Finally, since F × B ( X ) is a π -system (i.e., closed under finite intersections), we conclude that H = F B ( X ) by Dynkin’s π - λ theorem; see, e.g., ([25], Theorem 1.6.2). □
Given a Markov kernel κ : Ω × B ( X ) [ 0 , 1 ] , we define
κ ˜ : B F ( X ) L 0 ( F ; [ 0 , 1 ] ) ,
where κ ˜ ( X ) ( ω ) = κ ( ω , X ( ω ) ) for a.e. ω Ω . Note that κ ˜ is well-defined due to Lemma 6. Then, the function
{ X↑ : X ∈ B_F(X) } → R_F↓, X↑ ↦ κ̃(X)↑
is stable. Notice that, due to Proposition 9, {X↑ : X ∈ B_F(X)} is the descent of the Borel σ-algebra on L^0(F;X)↑ in the model V^(F). We can consider a name κ̃↑ that satisfies ⟦κ̃↑ : B(L^0(F;X)↑) → [0,1]⟧ = Ω̄ and
⟦κ̃↑(X↑) = κ̃(X)↑⟧ = Ω̄ for all X ∈ B_F(X).
Then, a manipulation of Boolean truth values bearing in mind Propositions 4 and 5 proves the following.
Proposition 14.
Let κ : Ω × B ( X ) [ 0 , 1 ] be a Markov kernel. Then, κ ˜ is a Borel probability measure in the model V ( F ) .
In a converse direction, we have the following.
Proposition 15.
Suppose that Q is a Borel probability measure in the model V ( F ) . Then, there exists a Markov kernel κ : Ω × B ( X ) [ 0 , 1 ] such that Q = κ ˜ = Ω ¯ . Moreover, if κ , τ are Markov kernels such that Q = κ ˜ = τ ˜ = Ω ¯ , then κ ( ω , · ) = τ ( ω , · ) for a.e ω Ω .
Proof. 
Given E B ( X ) , we denote again by E the (class of the) constant random set ω E . Take ξ E L 0 ( F ; X ) such that ξ E = Q ( E ) = Ω ¯ . By choosing a representative of ξ E for each E B ( X ) , we can define μ : Ω × B ( X ) [ 0 , 1 ] such that μ ( · , E ) is F -measurable and μ ( ω , E ) = ξ E ( ω ) for a.e. ω Ω . Then, μ is an essential Markov kernel. Due to Lemma 5, there exists a Markov kernel κ such that κ ( ω , E ) = μ ( ω , E ) for a.e. ω , for all E B ( X ) . Define
H := { M ∈ F ⊗ B(X) : ⟦κ̃↑((M_·)↑) = Q((M_·)↑)⟧ = Ω̄ }.
For M = A × E with A F and E B ( X ) , we have that κ ( ω , M ω ) = 1 A ( ω ) κ ( ω , E ) = 1 A ( ω ) μ ( ω , E ) = 1 A ( ω ) ξ E ( ω ) for a.e. ω . In the model V ( F ) , it holds that
κ ˜ ( M · ) = ξ E A ¯ + 0 A ¯ c = Q ( E ) A ¯ + Q ( ) A ¯ c = Q ( M · ) .
By Propositions 4 and 5, H is a σ -algebra, hence H = F B ( X ) .
Finally, suppose that κ , τ are Markov kernels with Q = κ ˜ = τ ˜ = Ω ¯ . If E B ( X ) , then, in the model V ( F ) , it holds
κ̃(E)↑ = κ̃↑(E↑) = τ̃↑(E↑) = τ̃(E)↑.
Hence, κ ( ω , E ) = τ ( ω , E ) for a.e. ω , for every E B ( X ) . Define A : = { ω Ω : κ ( ω , · ) τ ( ω , · ) } . Since X is second countable, then there exists a countable π -system D : = { E 1 , E 2 , } with σ ( D ) = B ( X ) (it suffices to take the collection of finite intersections of a countable topological base). For each n N , define A n : = { ω : κ ( ω , E n ) τ ( ω , E n ) } . Then, A = n A n , by the Dynkin’s π - λ theorem. Hence, A F , and P ( A ) = 0 since P ( A n ) = 0 . Then, the assertion follows. □
Remark 5.
A related connection of the above reciprocal relations in Propositions 14 and 15 is given in ([1], Theorem 4.1) using the language of conditional sets.
Suppose that κ : Ω × B(X) → [0,1] is a Markov kernel and F ∈ B_F^R(X). If F(ω,·) is κ(ω,·)-integrable for a.e. ω ∈ Ω, we denote by ∫F dκ the unique element η ∈ L^0(F;R) such that η(ω) = ∫ F(ω, x) κ(ω, dx) for a.e. ω ∈ Ω.
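As a concrete and very simplified illustration of κ̃ and of the integral ∫F dκ, one can take a three-point base space and an ω-dependent Gaussian kernel; the probabilities κ(ω, X(ω)) and the integrals ∫F(ω, x) κ(ω, dx) are then computed pointwise in ω. The sketch below uses SciPy, with all names chosen for illustration only.

import numpy as np
from scipy import stats

# Toy base space Omega = {0, 1, 2}; kappa(omega, .) is an omega-dependent normal law.
kernel = {0: stats.norm(0.0, 1.0), 1: stats.norm(2.0, 0.5), 2: stats.norm(-1.0, 2.0)}

# A random Borel set given by omega-dependent intervals X(omega) = [a, b].
intervals = {0: (-1.0, 1.0), 1: (1.0, 3.0), 2: (0.0, np.inf)}

def kappa_tilde(omega):
    """kappa(omega, X(omega)), cf. the measurability statement of Lemma 6."""
    a, b = intervals[omega]
    return kernel[omega].cdf(b) - kernel[omega].cdf(a)

def integral(omega, F):
    """(int F dkappa)(omega) = int F(omega, x) kappa(omega, dx)."""
    return kernel[omega].expect(lambda x: F(omega, x))

F = lambda omega, x: np.cos(x) + omega    # a random Borel function, bounded for each fixed omega
for omega in kernel:
    print(omega, kappa_tilde(omega), integral(omega, F))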
Proposition 16.
Let κ : Ω × B(X) → [0,1] be a Markov kernel and F ∈ B_F^R(X). Then, F(ω,·) is κ(ω,·)-integrable for a.e. ω ∈ Ω if and only if ⟦F̃↑ is κ̃↑-integrable⟧ = Ω̄. In that case,
⟦(∫F dκ)↑ = ∫F̃↑ dκ̃↑⟧ = Ω̄.
Proof. 
We prove (1) for F 0 . It is not difficult to show the equality for F = 1 M with M F B ( X ) . By linearity, (1) holds for F simple. Finally, for F B F R ( X ) arbitrary, take a sequence F 1 F 2 of simple functions with lim n F n = F . Then, in the model V ( F ) , by monotone convergence one has
F d κ = lim n F n d κ = lim n F ˜ n d κ ˜ = F ˜ d κ ˜ .
The proof is complete. □
Denote by P ( X ) the set of all Borel probability measures on the Polish space X . We endow P ( X ) with the Prokhorov metric, which is defined by
π(P, Q) := inf{ ε > 0 : P(C) ≤ Q(C^ε) + ε and Q(C) ≤ P(C^ε) + ε for all closed C ⊆ X },
where C^ε := { x ∈ X : d(x, y) < ε for some y ∈ C }. Recall that the metric π induces the topology of weak convergence of probability measures. Namely, a sequence (Q_n) ⊆ P(X) converges to Q if and only if lim_n ∫ f dQ_n = ∫ f dQ for every f ∈ C_b(X), where C_b(X) denotes the set of all bounded continuous functions f : X → R. In addition, the set P(X), endowed with the Prokhorov metric, is a Polish space, and the metric π is compatible with the weak topology σ(P(X), C_b(X)). For further details, see [26,27]. Since P(X) is Polish, it is possible to apply the results studied in the previous section to any Boolean valued representation of L^0(F;P(X)).
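The weak convergence underlying the Prokhorov metric can be checked numerically on test functions; the sketch below is a standard textbook example (not specific to the present setting) verifying ∫f dQ_n → ∫f dQ for the Poisson approximation of binomial laws.

import numpy as np
from scipy import stats

# Q_n = Binomial(n, lam/n) converges weakly to Q = Poisson(lam):
# int f dQ_n -> int f dQ for every bounded continuous f.
lam = 2.0
f = np.cos                                   # a bounded continuous test function

support = np.arange(0, 200)
limit = np.sum(f(support) * stats.poisson(lam).pmf(support))
for n in (5, 50, 500):
    k = np.arange(0, n + 1)
    value = np.sum(f(k) * stats.binom(n, lam / n).pmf(k))
    print(n, round(value, 6), "->", round(limit, 6))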
Markov kernels can be regarded as P ( X ) -valued random variables. Namely, we have the following.
Lemma 7.
([28], Theorem A.5.2) If κ : Ω × B ( X ) [ 0 , 1 ] is a Markov kernel, then there exists ν L 0 ( F ; P ( X ) ) such that ν ( ω ) = κ ( ω , · ) for a.e. ω Ω . If ν L 0 ( F ; P ( X ) ) , then there exists a Markov kernel κ : Ω × B ( X ) [ 0 , 1 ] such that ν ( ω ) = κ ( ω , · ) for a.e. ω Ω .
In virtue of Lemma 7, in the following, an element κ L 0 ( F ; P ( X ) ) is regarded as an equivalence class of Markov kernels.
Denote by P ( L 0 ( F ; X ) ) F a name for the Borel probability measures on L 0 ( F ; X ) , in the model V ( F ) . Proposition 14 and 15 tell us that
L 0 ( F ; P ( X ) ) P ( L 0 ( F ; X ) ) F , κ κ ˜
is a stable bijection. In other words, P ( L 0 ( F ; X ) ) F is a Boolean valued representation of L 0 ( F ; P ( X ) ) . In particular, we can apply the relations provided in the previous section to the Prokhorov metric π , and its randomized version π ˜ can be defined as
π ˜ ( κ 1 , κ 2 ) ( ω ) : = π ( κ 1 ( ω ) , κ 2 ( ω ) ) for a.e.   ω Ω ,
for every κ 1 , κ 2 L 0 ( F ; P ( X ) ) .

6. Boolean Valued Representation of Regular Conditional Probability Distributions

In the following, we assume that (Ω, E, P) is a probability space and F is a sub-σ-algebra of E that contains all the P-null sets. (If F does not contain all the null sets, we can complete E and F by considering the σ-algebras E* := σ(E ∪ P⁻¹(0)) and F* := σ(F ∪ P⁻¹(0)).) Conditional expectations are commonly defined for random variables with finite expectation, but we can naturally extend this to a more general setting. Namely, for ξ ∈ L^0(E;R) with lim_n E_P[|ξ| ∧ n | F] < +∞, we define the extended conditional expectation of ξ by
E_P[ξ|F] := lim_n E_P[ξ⁺ ∧ n | F] − lim_n E_P[ξ⁻ ∧ n | F] ∈ L^0(F;R).
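In the simplest case, when F is generated by a finite measurable partition, E_P[ξ|F] is the random variable that is constant on each cell and equal there to the P-weighted average of ξ; the following minimal sketch, assuming a finite Ω, computes exactly that (the extended conditional expectation above reduces to it when ξ is bounded).

import numpy as np

# Toy model: Omega = {0,...,5} with probabilities P; F is generated by the
# partition {{0,1,2}, {3,4}, {5}} of Omega.
P = np.array([0.1, 0.2, 0.1, 0.25, 0.25, 0.1])
partition = [np.array([0, 1, 2]), np.array([3, 4]), np.array([5])]

def cond_exp(xi):
    """E_P[xi | F]: on each cell of the partition, the P-weighted average of xi."""
    out = np.empty_like(xi, dtype=float)
    for cell in partition:
        out[cell] = np.dot(P[cell], xi[cell]) / P[cell].sum()
    return out

xi = np.array([1.0, 4.0, -2.0, 0.0, 10.0, 3.0])
print(cond_exp(xi))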
Next, we study a Boolean valued representation of L 0 ( E ; R ) in the model V ( F ) . Gordon [29] established that the conditional expectation is the Boolean valued interpretation of the usual expectation. Further relations can be found in ([16], Chapter 4). We start by briefly recalling how a probability space ( Ω , E , P ) can be made into a probability space in the model V ( F ) ; the details can be found in ([16], Chapter 4). This construction is valid for a general probability space even if it is not complete. Consider the measure algebra E : = E / P 1 ( 0 ) . Then, E is a stable F -set, where Δ ( E , F ) : = { A F : A E = A F } for E , F E (see e.g., ([30], Section 1.10)). Therefore, we can consider the bijection
E E , E E
of E into its Boolean valued representation E . In addition, it is shown that E is a complete Boolean algebra in the model V ( F ) . Furthermore, the conditional probability
P ( · | F ) : E L 0 ( F ; [ 0 , 1 ] ) , P ( E | F ) : = E [ 1 E | F ]
is a stable function and it can be shown that P ( · | F ) is a probability measure on E , in the model V ( F ) . By applying first the transfer principle to the Stone representation theorem for measure algebras ([31], 321J) and then the maximum principle, we can find members Ω ˜ , E ˜ , and P ˜ of V ¯ ( F ) such that:
  • ( Ω ˜ , E ˜ , P ˜ ) is probability space   = Ω ¯ , and
  • E = E ˜ / P ˜ 1 ( 0 ) and   P ( · | F ) = P ˜ / P ˜ 1 ( 0 ) = Ω ¯ .
Furthermore, let L 0 ( E ˜ ; R ) F be a name with L 0 ( E ˜ ; R ) F = L 0 ( E ˜ ; R ) = Ω ¯ . Then, as shown in ([16], Proposition 4.1.6), it is possible to find a bijection
ι E : L 0 ( E ; R ) L 0 ( E ˜ ; R ) F , ξ ξ
such that:
(S1) ι_E and ι_F coincide on L^0(F;R);
(S2) (Σ 1_{A_k} ξ_k)↑ = Σ ξ_k↑|A_k for every sequence (ξ_k) ⊆ L^0(E;R) and countable partition (A_k) ⊆ F of Ω̄;
(S3) ⟦ξ↑ = η↑⟧ = ⋁{ A ∈ F : 1_A ξ = 1_A η };
(S4) ⟦ξ↑ ≤ η↑⟧ = ⋁{ A ∈ F : 1_A ξ ≤ 1_A η };
(S5) ⟦ξ↑ + η↑ = (ξ + η)↑⟧ = Ω̄;
(S6) ⟦{ξ = η}↑ = {ξ↑ = η↑}⟧ = Ω̄;
(S7) ⟦{ξ ≤ η}↑ = {ξ↑ ≤ η↑}⟧ = Ω̄;
(S8) ⟦(1_E)↑ = 1_{E↑}⟧ = Ω̄ for all E ∈ E;
(S9) ⟦P(ξ ≤ η | F)↑ = P̃(ξ↑ ≤ η↑)⟧ = Ω̄ for all η ∈ L^0(F;R);
(S10) ⟦E_P[ξ|F]↑ = E_P̃[ξ↑]⟧ = Ω̄ for all ξ with E_P[|ξ| | F] < +∞.
Given X B F ( R ) and ξ L 0 ( E ; R ) , we define { ξ X } to be the class in E of the set { ω Ω : ξ 0 ( ω ) X 0 ( ω ) } , where ξ 0 and X 0 are arbitrary representatives of ξ and X, respectively. In addition, in the model V ( F ) , we can similarly consider { ξ X } for the random variable ξ and the Borel set X .
Proposition 17.
Suppose that ξ ∈ L^0(E;R). Then, it holds that
⟦{ξ ∈ X}↑ = {ξ↑ ∈ X↑}⟧ = Ω̄
for all X ∈ B_F(R). In particular, if X is a.s. nonempty,
⟦{ξ ∈ X}↑ = {ξ↑ ∈ L^0(F;X)↑}⟧ = Ω̄.
Proof. 
The collection
H : = { X : { ξ X } = { ξ X } }
is well-defined and stable. Then, H is a collection of real Borel sets in the model V ( F ) , due to Proposition 9. Moreover, in the model V ( F ) , H is a σ -algebra that contains the real intervals ( , r ] due to (S7). Therefore, H = B ( R ) = Ω ¯ , and the assertion follows. □
If ξ L 0 ( E ; R ) , it is well known that there exists a Markov kernel κ ξ | F : Ω × B ( R ) [ 0 , 1 ] such that
P ( ξ E | F ) ( ω ) = κ ξ | F ( ω , E ) for a.e.   ω Ω ,
for all E B ( R ) . (Actually, it is a consequence of Lemma 5.) The Markov kernel κ ξ | F above is called a regular conditional distribution of ξ given F ; see, e.g., [32]. The following result tells us that the conditional distribution of a real-valued random variable can be interpreted as the distribution of a real-valued random variable in the model V ( F ) .
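As a simple numerical check (a sketch under the assumption of a two-valued F-measurable parameter, not a construction from the text), one can simulate ξ = θ + Z with Z standard normal independent of θ; a regular conditional distribution of ξ given F = σ(θ) is then κ(ω, ·) = N(θ(ω), 1), and empirical conditional frequencies agree with it.

import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
n = 200_000

theta = rng.choice([0.0, 2.0], size=n)       # F-measurable parameter (F = sigma(theta))
xi = theta + rng.standard_normal(n)          # given F, xi is N(theta, 1)

t = 2.5
for value in (0.0, 2.0):
    empirical = np.mean(xi[theta == value] <= t)   # estimate of P(xi <= t | F) on {theta = value}
    predicted = stats.norm(value, 1.0).cdf(t)      # kappa(omega, (-inf, t]) there
    print(value, round(empirical, 3), round(predicted, 3))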
Proposition 18.
Suppose that ξ ∈ L^0(E;R). Then,
⟦P(ξ ∈ X | F)↑ = P̃(ξ↑ ∈ X↑)⟧ = Ω̄
holds for every X ∈ B_F(R). In particular, if X is a.s. nonempty,
⟦P(ξ ∈ X | F)↑ = P̃(ξ↑ ∈ L^0(F;X)↑)⟧ = Ω̄.
Proof. 
Suppose that κ ξ | F is a regular conditional distribution of ξ given F . We have that κ ˜ ξ | F ( X ) = P ( ξ X | F ) = Ω ¯ for all X B F ( R ) . In the model V ( F ) , the probability measure
P ˜ ( ξ · ) : B ( R ) [ 0 , 1 ] , E P ˜ ( ξ E )
agrees with κ ˜ ξ | F on real intervals ( , r ] due to (S9). Therefore, κ ˜ ξ | F = P ˜ ( ξ · ) in the model V ( F ) . □
A sequence ( ξ n ) in L 0 ( E ; R ) is said to be:
  • Conditionally independent if it is satisfied that
    P( ⋂_{i=1}^n {ξ_i ≤ x_i} | F ) = ∏_{i=1}^n P(ξ_i ≤ x_i | F)
    for all n ∈ N and x_1, …, x_n ∈ R.
  • Conditionally identically distributed if
    P ( ξ 1 x | F ) = P ( ξ n x | F )
    for all x R and n N .
Given a sequence ( ξ n ) L 0 ( E ; R ) , by stability, we can find a sequence ξ · of elements of L 0 ( E ˜ ) in the model V ( F ) .
Proposition 19.
([33], Proposition 4.6) Suppose that (ξ_n) is a sequence in L^0(E;R) such that ess.sup_{n∈N} |ξ_n| < +∞. Then,
⟦(lim sup_n ξ_n)↑ = lim sup_n ξ_n↑⟧ = Ω̄ and ⟦(lim inf_n ξ_n)↑ = lim inf_n ξ_n↑⟧ = Ω̄.
In particular, lim_n ξ_n exists iff ⟦lim_n ξ_n↑ exists⟧ = Ω̄. In that case, ⟦(lim_n ξ_n)↑ = lim_n ξ_n↑⟧ = Ω̄.
A manipulation of Boolean truth values bearing in mind (S9) above shows the following result, which holds true even if the underlying probability space is not complete.
Proposition 20.
Suppose that ( ξ n ) is a sequence in L 0 ( E ; R ) . Then, the following properties hold:
 1. 
(ξ_n) is conditionally independent iff ⟦(ξ_n↑) is independent⟧ = Ω̄;
 2. 
(ξ_n) is conditionally identically distributed iff ⟦(ξ_n↑) is identically distributed⟧ = Ω̄.

7. A Transfer Principle for Large Deviations of Markov Kernels

As an application of the connections provided in the previous sections, we next develop a transfer principle that allows for the interpretation of results in large deviations theory as versions for sequences of Markov kernels. Let us first recall some basics of large deviations theory. For a thorough account, we refer to [11]. Suppose that I : X → [0, +∞] is a rate function (i.e., a lower semicontinuous function which is not identically +∞). Let (Q_n) be a sequence in P(X). (A small numerical illustration of a rate function, in the classical setting of Cramér's theorem, is sketched after the following definitions.)
  • Say that ( Q n ) satisfies the large deviation principle (LDP) with rate function I if
    −inf_{x∈O} I(x) ≤ lim inf_n (1/n) log Q_n(O) for all nonempty open O ⊆ X,
    −inf_{x∈C} I(x) ≥ lim sup_n (1/n) log Q_n(C) for all nonempty closed C ⊆ X.
  • Say that ( Q n ) satisfies the Laplace principle (LP) with rate function I if
    lim_n (1/n) log ∫ e^{n f(x)} Q_n(dx) = sup_{x∈X} { f(x) − I(x) }
    for every bounded continuous function f : X R .
  • Say that (Q_n) is exponentially tight if, for every m ∈ N, there exists a compact set K_m ⊆ X such that
    lim sup_n (1/n) log Q_n(K_m^c) ≤ −m.
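For orientation, in the classical Cramér setting the rate function is the Legendre transform of the logarithmic moment generating function, I(x) = sup_t { tx − Λ(t) }. The sketch below is an illustration of this standard formula, with a Bernoulli law chosen for convenience; it computes the transform numerically and compares it with the closed-form relative entropy expression.

import numpy as np
from scipy.optimize import minimize_scalar

p = 0.3                                       # Bernoulli parameter (illustrative choice)

def Lambda(t):
    """Logarithmic moment generating function of Bernoulli(p)."""
    return np.log(1.0 - p + p * np.exp(t))

def rate_numeric(x):
    """I(x) = sup_t { t*x - Lambda(t) }, computed by minimizing the negative."""
    res = minimize_scalar(lambda t: -(t * x - Lambda(t)), bounds=(-50, 50), method="bounded")
    return -res.fun

def rate_closed(x):
    """Closed form: relative entropy of Bernoulli(x) with respect to Bernoulli(p)."""
    return x * np.log(x / p) + (1 - x) * np.log((1 - x) / (1 - p))

for x in (0.1, 0.3, 0.5, 0.8):
    print(x, round(rate_numeric(x), 6), round(rate_closed(x), 6))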
Next, we introduce analogues for Markov kernels of the notions above.
Definition 7.
A function I : L 0 ( F ; X ) L 0 ( F ; [ 0 , + ] ) is called a conditional rate function, if the following holds:
  • 1.I is stable;
  • 2.there exists ξ L 0 ( F ; X ) such that I ( ξ ) < + ;
  • 3. I(ξ) ≤ lim inf_n I(ξ_n) whenever lim_n d̃(ξ_n, ξ) = 0.
Definition 8.
Suppose that ( κ n ) is a sequence of Markov kernels κ n : Ω × B ( X ) [ 0 , 1 ] and I : L 0 ( F ; X ) L 0 ( F ; [ 0 , + ] ) is a conditional rate function.
 1. 
Say that ( κ n ) satisfies the conditional large deviation principle (cLDP) with conditional rate function I if:
−ess.inf_{η ∈ L^0(F;O)} I(η) ≤ lim inf_n (1/n) log κ̃_n(O)
for all a.s. nonempty random open sets O ∈ B_F(X), and
−ess.inf_{η ∈ L^0(F;C)} I(η) ≥ lim sup_n (1/n) log κ̃_n(C)
for all a.s. nonempty random closed sets C ∈ B_F(X).
 2. 
Say that ( κ n ) satisfies the conditional Laplace principle (cLP) with conditional rate function I if
lim_n (1/n) log ∫ e^{nF} dκ_n = ess.sup_{η ∈ L^0(F;X)} { F̃(η) − I(η) }
for all F C b , F R ( X ) .
 3. 
Say that (κ_n) is conditionally exponentially tight if, for every m ∈ N, there exists an a.s. nonempty K_m ∈ B_F(X) such that L^0(F;K_m) is stably compact and
lim sup_n (1/n) log κ̃_n(K_m^c) ≤ −m.
As a consequence of (R8) above, we have the following.
Lemma 8.
Suppose that I : L 0 ( F ; X ) L 0 ( F ; [ 0 , + ] ) is a stable function, then:
 1. 
For every a.s. nonempty X B F ( X ) , it holds that
⟦(ess.inf_{ξ ∈ L^0(F;X)} I(ξ))↑ = inf_{x ∈ L^0(F;X)↑} I↑(x)⟧ = Ω̄;
 2. 
for every F B F R ( X ) , it holds that
⟦(ess.sup_{ξ ∈ L^0(F;X)} {F̃(ξ) − I(ξ)})↑ = sup_{x ∈ L^0(F;X)↑} {F̃↑(x) − I↑(x)}⟧ = Ω̄.
Given a conditional rate function I : L 0 ( F ; X ) L 0 ( F ; [ 0 , + ] ) , we can find a name I such that I : L 0 ( F ; X ) [ 0 , + ] = Ω ¯ . Moreover, I is a rate function in the model V ( F ) . Then, the Boolean valued representations provided in Section 4 and Section 5 give all the elements to show the following by means of a simple manipulation of Boolean truth values.
Theorem 4.
Suppose that ( κ n ) is a sequence of Markov kernels κ n : Ω × B ( X ) [ 0 , 1 ] and I : L 0 ( F ; X ) L 0 ( F ; [ 0 , + ] ) a conditional rate function. Then,
 1. 
( κ n ) satisfies the cLDP with conditional rate function I iff
⟦(κ̃_n↑) satisfies the LDP with rate function I↑⟧ = Ω̄;
 2. 
( κ n ) satisfies the cLP with conditional rate function I iff
⟦(κ̃_n↑) satisfies the LP with rate function I↑⟧ = Ω̄;
 3. 
(κ_n) is conditionally exponentially tight iff ⟦(κ̃_n↑) is exponentially tight⟧ = Ω̄.

The Interpretation of Basic Theorems

By the transfer principle, if φ is a known theorem, then the assertion φ = Ω ¯ is also a theorem. This provides a technology for expanding the content of the already available theorems. In the following, we use this method to expand large deviations results on sequences of probability distributions to new large deviations results on sequences of Markov kernels.
We derive the following version of Varadhan’s large deviations theorem.
Theorem 5.
Let ( κ n ) be a sequence of Markov kernels κ n : Ω × B ( X ) [ 0 , 1 ] and I : L 0 ( F ; X ) L 0 ( F ; [ 0 , + ] ) a conditional rate function. If ( κ n ) satisfies the cLDP with conditional rate function I, then ( κ n ) satisfies the cLP with conditional rate function I.
Consider the classical theorem of Varadhan; see, e.g., ([34], Theorem III.13, p. 32). Then, in view of Theorem 4, the statement above is a reformulation of ⟦Varadhan's theorem⟧ = Ω̄, which is also a theorem due to the transfer principle. Similarly, we have the following version of Bryc's large deviations theorem ([11], Theorem 4.4.2).
Theorem 6.
Let $(\kappa_n)$ be a sequence of Markov kernels $\kappa_n : \Omega \times \mathcal{B}(X) \to [0,1]$ such that the limit
\[
\phi(F) := \lim_{n\to\infty} \frac{1}{n}\log \int e^{nF}\, d\kappa_n
\]
exists for all $F \in C^{\mathbb{R}}_{b,\mathcal{F}}(X)$. If $(\kappa_n)$ is conditionally exponentially tight, then $(\kappa_n)$ satisfies the cLDP and the cLP with conditional rate function $I : L^0(\mathcal{F};X) \to L^0(\mathcal{F};[0,+\infty])$ given by
\[
I(\eta) := \operatorname*{ess.sup}_{F \in C^{\mathbb{R}}_{b,\mathcal{F}}(X)} \{\tilde{F}(\eta) - \phi(F)\}.
\]
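The unconditional result behind Theorem 6 is Bryc's inverse Varadhan lemma ([11], Theorem 4.4.2): if $(\mu_n)$ is exponentially tight and the limit $\Lambda(F) := \lim_{n\to\infty} \frac{1}{n}\log \int e^{nF}\, d\mu_n$ exists for every bounded continuous $F$, then $(\mu_n)$ satisfies the LDP with the good rate function
\[
I(x) = \sup_{F \in C_b(X)}\{F(x) - \Lambda(F)\},
\]
and, moreover, $\Lambda(F) = \sup_{x}\{F(x) - I(x)\}$ for every bounded continuous $F$. Theorem 6 is obtained by reading this statement inside $V^{(\mathcal{F})}$ and translating it back via Theorem 4.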
Suppose now that, as in Section 6, $(\Omega,\mathcal{E},P)$ is a complete probability space and $\mathcal{F}$ is a sub-$\sigma$-algebra of $\mathcal{E}$ that contains all the null sets. If $\eta_n := \frac{1}{n}(\xi_1 + \cdots + \xi_n)$, $n \in \mathbb{N}$, is the sequence of sample means of a conditionally i.i.d. sequence $(\xi_n) \subseteq L^0(\mathcal{E};\mathbb{R})$, then, due to (S5) and Proposition 20, $\eta_{\cdot}$ is the sequence of sample means of an i.i.d. sequence $\xi_{\cdot}$ in the model $V^{(\mathcal{F})}$. The following conditional version of Cramér's large deviation theorem then follows from a Boolean valued interpretation of its unconditional version ([11], Theorem 2.2.3), bearing in mind Proposition 18.
Theorem 7.
Let $(\xi_n) \subseteq L^0(\mathcal{E};\mathbb{R})$ be conditionally i.i.d. Suppose that $\kappa_n : \Omega \times \mathcal{B}(\mathbb{R}) \to [0,1]$ is a regular conditional distribution of $\frac{1}{n}\sum_{i=1}^{n}\xi_i$ given $\mathcal{F}$, for each $n \in \mathbb{N}$. Then, $(\kappa_n)$ satisfies the cLDP with conditional rate function $I : L^0(\mathcal{F};\mathbb{R}) \to L^0(\mathcal{F};[0,+\infty])$ given by
\[
I(\eta) := \operatorname*{ess.sup}_{\varsigma \in L^0(\mathcal{F};\mathbb{R})} \{\eta\varsigma - \Lambda(\varsigma)\},
\]
where $\Lambda(\varsigma) := \log E[\exp(\varsigma\xi_1) \mid \mathcal{F}]$.
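For instance, if, conditionally on $\mathcal{F}$, the variables $\xi_n$ are i.i.d. Gaussian with $\mathcal{F}$-measurable mean $m \in L^0(\mathcal{F};\mathbb{R})$ and unit variance, then a direct computation gives
\[
\Lambda(\varsigma) = m\varsigma + \tfrac{1}{2}\varsigma^2, \qquad
I(\eta) = \operatorname*{ess.sup}_{\varsigma \in L^0(\mathcal{F};\mathbb{R})}\bigl\{\eta\varsigma - m\varsigma - \tfrac{1}{2}\varsigma^2\bigr\} = \tfrac{1}{2}(\eta - m)^2,
\]
the essential supremum being attained at the $\mathcal{F}$-measurable point $\varsigma = \eta - m$. Thus the conditional rate function is the classical Gaussian rate function with the deterministic mean replaced by the $\mathcal{F}$-measurable mean $m$.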
All these theorems are just examples: for any theorem $\varphi$ of large deviations theory, the assertion $[\![\,\varphi\,]\!] = \bar{\Omega}$ immediately yields a corresponding version for Markov kernels.
We finish this section by pointing out that several results in the literature involving sample means of conditionally independent random variables can easily be proven by means of the transfer principle. For reasons of space, we focus on a few of them. Recall the strong law of large numbers, which asserts that the sequence of sample means of an i.i.d. sequence of random variables converges a.s. to the mean of their common distribution. As explained above, the sequence of sample means of a conditionally i.i.d. sequence of random variables is represented by the sequence of sample means of an i.i.d. sequence of random variables in the model $V^{(\mathcal{F})}$. Then, bearing in mind Proposition 19, the law of large numbers in the model $V^{(\mathcal{F})}$ is interpreted as the following known conditional law of large numbers (see ([35], Proposition 2.3)), which holds true due to the transfer principle.
Theorem 8.
Suppose that $(\xi_n) \subseteq L^0(\mathcal{E};\mathbb{R})$ is a conditionally i.i.d. sequence such that $E_P[\,|\xi_1| \mid \mathcal{F}\,] < +\infty$. Then, $\lim_{n\to\infty} \frac{1}{n}\sum_{i=1}^{n}\xi_i = E_P[\xi_1 \mid \mathcal{F}]$ a.s.
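The following short numerical sanity check of Theorem 8 is only an illustrative sketch in Python: the Beta prior, the Bernoulli model, and all variable names are our own choices and not objects from the preceding results. Take $\mathcal{F} = \sigma(\Theta)$ for a random parameter $\Theta$ and let the $\xi_i$ be conditionally i.i.d. Bernoulli$(\Theta)$ given $\Theta$, so that $E_P[\xi_1 \mid \mathcal{F}] = \Theta$ and the sample means should track $\Theta$ path by path.

import numpy as np

rng = np.random.default_rng(0)
n_paths, n_samples = 5, 100_000

# Theta generates the conditioning sigma-algebra F = sigma(Theta).
theta = rng.beta(2.0, 5.0, size=n_paths)

# Given Theta, the xi_i are i.i.d. Bernoulli(Theta): a conditionally i.i.d. sequence.
xi = rng.random((n_paths, n_samples)) < theta[:, None]

# Conditional law of large numbers: (1/n) * sum_{i<=n} xi_i -> E[xi_1 | F] = Theta a.s.
sample_means = xi.mean(axis=1)
for t, m in zip(theta, sample_means):
    print(f"Theta = {t:.4f}   sample mean after {n_samples} draws = {m:.4f}")

Each printed sample mean lies close to the corresponding realization of $\Theta$, and the agreement improves as n_samples grows, in line with Theorem 8.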
In addition, the conditional versions of the law of large numbers obtained in [36] (see Theorems 3.5 and 4.2) are also interpretations of their classical counterparts. The same applies to the main result of [37] (see Theorem 2.1), which is a conditional version of the Kolmogorov–Feller weak law of large numbers and follows from the Boolean valued interpretation of its unconditional version.

8. Conclusions

As shown in Section 4, Section 5 and Section 6, all the basic objects in random set theory have a natural Boolean valued representation in the model $V^{(\mathcal{F})}$. Namely, random sets, random functions, Markov kernels, and regular conditional distributions are respectively represented by Borel sets, Borel functions, Borel probability measures, and probability distributions in the model $V^{(\mathcal{F})}$. On the other hand, Boolean valued analysis provides a technology for expanding the content of already available theorems: each known theorem involving Borel sets, Borel functions, Borel probability measures, and/or probability distributions automatically has a non-obvious random set analogue involving, respectively, random sets, random functions, Markov kernels, and/or regular conditional distributions. This is a powerful tool for formalizing and proving results in random set theory, potentially applicable to large deviations, stochastic optimization, and mathematical finance. It is illustrated in Section 7 with the new limit results for Markov kernels obtained by this method, namely Theorems 5–7. Nevertheless, these results are just examples of applications, and many further instances can be obtained in the same way.

Author Contributions

Both authors contributed equally to this work. All authors have read and agreed to the published version of the manuscript.

Funding

The first author was supported by project MTM2017-86182-P (Government of Spain, AEI/ERDF-FEDER, EU) and by project 20797/PI/18 by Fundación Séneca, ACyT Región de Murcia. The second author was partially supported by the grant Fundación Séneca 20903/PD/18.

Conflicts of Interest

The authors declare no conflict of interest.

References

1. Jamneshan, A.; Kupper, M.; Streckfuß, M. Measures and integrals in conditional set theory. Set-Valued Var. Anal. 2018, 26, 947–973.
2. Molchanov, I. Theory of Random Sets. Probability and Its Applications, 2nd ed.; Springer: London, UK, 2017.
3. Avilés, A.; Zapata, J.M. Boolean-valued models as a foundation for locally L0-convex analysis and conditional set theory. J. Appl. Logics 2018, 5, 389–420.
4. Drapeau, S.; Jamneshan, A.; Karliczek, M.; Kupper, M. The algebra of conditional sets, and the concepts of conditional topology and compactness. J. Math. Anal. Appl. 2016, 437, 561–589.
5. Kusraev, A.G.; Kutateladze, S.S. Boolean Valued Analysis. Mathematics and Its Applications; Springer: Dordrecht, The Netherlands, 2012.
6. Haier, A.; Molchanov, I. Multivariate risk measures in the non-convex setting. Stat. Risk Model. 2019, 36, 25–35.
7. Lepinette, E. Random optimization on random sets. Math. Methods Oper. Res. 2020, 91, 159–173.
8. Lepinette, E.; Molchanov, I. Conditional cores and conditional convex hulls of random sets. J. Math. Anal. Appl. 2019, 478, 368–393.
9. Molchanov, I.; Cascos, I. Multivariate risk measures: A constructive approach based on selections. Math. Financ. 2016, 26, 867–900.
10. Molchanov, I.; Molinari, F. Random Sets in Econometrics; Cambridge University Press: Cambridge, UK, 2018; Volume 60.
11. Dembo, A.; Zeitouni, O. Large Deviations Techniques and Applications; Springer: Berlin/Heidelberg, Germany, 1998.
12. Varadhan, S.R.S. Large deviations for random walks in a random environment. Commun. Pure Appl. Math. 2003, 56, 1222–1245.
13. Guo, T. Relations between some basic results derived from two kinds of topologies for a random locally convex module. J. Funct. Anal. 2010, 258, 3024–3047.
14. Föllmer, H.; Schied, A. Stochastic Finance: An Introduction in Discrete Time. De Gruyter Studies in Mathematics, 3rd ed.; Walter de Gruyter: Berlin, Germany; New York, NY, USA, 2011.
15. Takeuti, G. Two Applications of Logic to Mathematics; Princeton University Press: Princeton, NJ, USA, 1979.
16. Zapata, J.M. A Boolean-Valued Models Approach to L0-Convex Analysis, Conditional Risk and Stochastic Control. Ph.D. Thesis, Universidad de Murcia, Murcia, Spain, 2018. Available online: http://hdl.handle.net/10201/59422 (accessed on 1 August 2020).
17. Gordon, E.I. Real numbers in Boolean-valued models of set theory, and K-spaces. Dokl. Akad. Nauk SSSR 1977, 237, 773–775.
18. Kusraev, A.G.; Kutateladze, S.S. The Gordon theorem: Origins and meaning. Vladikavkaz Math. J. 2019, 21, 63–70.
19. Kabanov, Y.; Safarian, M. Markets with Transaction Costs: Mathematical Theory; Springer Science & Business Media: Berlin/Heidelberg, Germany, 2009.
20. Jamneshan, A.; Kupper, M.; Zapata, J.M. Parameter-dependent stochastic optimal control in finite discrete time. J. Optim. Theory Appl. 2020, 186, 644–666.
21. Kusraev, A.G. Dominated Operators; Kluwer Academic Publishers: Dordrecht, The Netherlands, 2000.
22. Kusraev, A.G. Vector Duality and Its Applications; Nauka: Novosibirsk, Russia, 1985.
23. Jamneshan, A.; Zapata, J.M. On compactness in topological L0-modules. Preprint, 2017.
24. Radchenko, V.N. The Radon–Nikodym theorem for random measures. Ukr. Math. J. 1989, 41, 57–61.
25. Cohn, D. Measure Theory, 2nd ed.; Birkhäuser: Basel, Switzerland, 2013; Volume 5.
26. Billingsley, P. Convergence of Probability Measures; John Wiley & Sons Inc.: New York, NY, USA, 1968.
27. Parthasarathy, K.R. Probability Measures on Metric Spaces; American Mathematical Soc.: Providence, RI, USA, 2005; Volume 352.
28. Dupuis, P.; Ellis, R.S. A Weak Convergence Approach to the Theory of Large Deviations; John Wiley & Sons: Hoboken, NJ, USA, 2011; Volume 902.
29. Gordon, E.I. K-spaces in Boolean-valued models of set theory. Dokl. Akad. Nauk SSSR 1981, 258, 777–780.
30. Kusraev, A.G.; Kutateladze, S.S. Boolean valued analysis: Selected topics. Vladikavkaz SMI VSC RAS 2014, 1000.
31. Fremlin, D.H. Measure Theory, Volume 3: Measure Algebras; Torres Fremlin: Colchester, UK, 2002; Volume 25.
32. Durrett, R. Probability: Theory and Examples, 2nd ed.; Duxbury Press: Belmont, CA, USA, 1996.
33. Zapata, J.M. A Boolean valued analysis approach to conditional risk. Vladikavkaz Math. J. 2019, 21, 71–89.
34. Den Hollander, F. Large Deviations; American Mathematical Soc.: Providence, RI, USA, 2008; Volume 14.
35. Hess, K.T. Conditional zero-one laws. Theory Probab. Its Appl. 2004, 48, 711–718.
36. Majerek, D.; Nowak, W.; Zieba, W. Conditional strong law of large number. Int. J. Pure Appl. Math. 2005, 20, 143–156.
37. Yuan, D.; Hu, X. A conditional version of the extended Kolmogorov–Feller weak law of large numbers. Stat. Probab. Lett. 2015, 97, 99–107.
Publisher’s Note: MDPI stays neutral with regard to jurisdictional claims in published maps and institutional affiliations.
