Article

Methods of Moment and Maximum Entropy for Solving Nonlinear Expectation

Department of Mathematics, Shanghai Jiao Tong University, Shanghai 200240, China
* Author to whom correspondence should be addressed.
Mathematics 2019, 7(1), 45; https://doi.org/10.3390/math7010045
Submission received: 4 December 2018 / Revised: 29 December 2018 / Accepted: 2 January 2019 / Published: 4 January 2019

Abstract:
In this paper, we consider a special nonlinear expectation problem on a special parameter space and give a necessary and sufficient condition for the existence of a solution. We then generalize this necessary and sufficient condition to the two-dimensional moment problem. Moreover, we use the maximum entropy method to construct a concrete solution and analyze the convergence of the maximum entropy solution. Numerical experiments are presented to compute the maximum entropy density functions.

1. Introduction

The sublinear expectation $\hat{\mathbb{E}}$ introduced by Peng [1,2] can be regarded as the supremum of a family of linear expectations $\{E_\theta : \theta \in \Theta\}$, that is,
$$\hat{\mathbb{E}}[\varphi(X)] = \sup_{\theta \in \Theta} E_\theta[\varphi(X)], \tag{1}$$
where $\varphi(x)$ is a locally Lipschitz continuous function and $\Theta$ is the parameter space.
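To make the definition concrete, the following minimal sketch computes a toy sublinear expectation as a supremum of linear expectations over a finite grid of parameters. The family $N(\theta, 1)$, the grid standing in for $\Theta$, and the test function $\varphi(x) = |x|$ are all illustrative assumptions, not the paper's setting.

```python
# Sketch: a toy sublinear expectation sup_theta E_theta[phi(X)], where theta
# is the mean of an assumed family N(theta, 1) and phi(x) = |x| (illustrative).
import numpy as np
from scipy.integrate import quad

def linear_expectation(phi, theta):
    """E_theta[phi(X)] for X ~ N(theta, 1), computed by quadrature."""
    density = lambda x: np.exp(-(x - theta)**2 / 2) / np.sqrt(2 * np.pi)
    return quad(lambda x: phi(x) * density(x), -np.inf, np.inf)[0]

thetas = np.linspace(-1.0, 1.0, 21)        # finite grid standing in for Theta
print(max(linear_expectation(np.abs, t) for t in thetas))   # sup over the family
```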
It is evident that the sublinear expectation defined by (1) depends on the choice of the parameter space $\Theta$. Different spaces will result in different nonlinear expectations. In particular, let $\varphi(x) = x^n$ and:
$$\bar{\mu}_n = \hat{\mathbb{E}}[X^n], \quad \underline{\mu}_n = -\hat{\mathbb{E}}[-X^n], \quad n = 1, 2, \ldots, N; \tag{2}$$
then the parameter space $\Theta$ can be chosen in the following form:
$$\Theta = [\underline{\mu}_1, \bar{\mu}_1] \times [\underline{\mu}_2, \bar{\mu}_2] \times \cdots \times [\underline{\mu}_N, \bar{\mu}_N]. \tag{3}$$
When $N = 1$, Peng [3] gave the definition of independent and identically distributed random variables and proved the weak law of large numbers (LLN) under the sublinear expectation and Condition (2). Furthermore, if $\Theta = \{0\} \times [\underline{\mu}_2, \bar{\mu}_2]$, then Peng [4,5,6,7,8] defined the G-normal distribution and presented a new central limit theorem (CLT) under $\hat{\mathbb{E}}$. The new LLN and CLT are the theoretical foundations of the framework of sublinear expectation.
The calculation of $\hat{\mathbb{E}}[\varphi(X)]$ can be performed by solving the following nonlinear partial differential equation:
$$\begin{cases} \dfrac{\partial u}{\partial t} - \dfrac{1}{2}\left(\bar{\sigma}^2 \left(\dfrac{\partial^2 u}{\partial x^2}\right)^{+} - \underline{\sigma}^2 \left(\dfrac{\partial^2 u}{\partial x^2}\right)^{-}\right) = 0, & (t, x) \in [0, +\infty) \times \mathbb{R}, \\ u(0, x) = \varphi(x), & x \in \mathbb{R}, \end{cases} \tag{4}$$
whose solution is $u(t, x) = \hat{\mathbb{E}}[\varphi(x + \sqrt{t}\,X)]$. When the initial value $\varphi(x)$ is a convex function, Peng [8] gave the expression of $\hat{\mathbb{E}}[\varphi(X)]$ as follows:
$$\hat{\mathbb{E}}[\varphi(X)] = \frac{1}{\sqrt{2\pi \bar{\mu}_2^2}} \int_{-\infty}^{+\infty} \varphi(x) \exp\left\{-\frac{x^2}{2\bar{\mu}_2^2}\right\} dx. \tag{5}$$
If $\varphi(x)$ is a concave function, the variance $\bar{\mu}_2^2$ in (5) is replaced by $\underline{\mu}_2^2$. For the case that is neither concave nor convex, Hu [9] derived explicit solutions of Problem (4) with the initial condition $\varphi(x) = x^n$, $n \geq 1$. Gong [10] used a fully-implicit numerical scheme to compute the nonlinear probability under the G-expectation determined by the G-heat Equation (4) with the initial condition $\varphi(x) = 1_{\{x < 0\}}$, $x \in \mathbb{R}$. Here, $1_{\{x < 0\}}$ denotes the indicator function of the set $\{x \mid x < 0\}$. What all the methods mentioned above have in common is that the G-expectation is calculated by solving the nonlinear partial differential Equation (4) with a particular initial condition $\varphi(x)$. However, because of the nonlinear term, it is not easy to find a solution of (4) for a general continuous initial function $\varphi(x)$.
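As a quick numerical illustration of Formula (5), the sketch below evaluates its right-hand side by quadrature for a convex test function; the choices $\varphi(x) = x^4$ and $\bar{\mu}_2^2 = 1$ are assumptions made only for this example.

```python
# A minimal sketch of evaluating (5) by numerical quadrature.
# Assumptions: phi(x) = x**4 (convex) and upper variance mu2_bar_sq = 1.
import numpy as np
from scipy.integrate import quad

def sublinear_expectation_convex(phi, mu2_bar_sq):
    """Right-hand side of (5) for a convex initial condition phi."""
    integrand = lambda x: phi(x) * np.exp(-x**2 / (2 * mu2_bar_sq))
    value, _ = quad(integrand, -np.inf, np.inf)
    return value / np.sqrt(2 * np.pi * mu2_bar_sq)

print(sublinear_expectation_convex(lambda x: x**4, 1.0))  # 3.0 = E[X^4] for N(0,1)
```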
For these reasons, in this paper we consider the special parameter space $\Theta$ defined in (3) and convert the sublinear expectation in (1) into the following two series of moment problems: find probability density functions $\bar{p}(x)$ and $\underline{p}(x)$ such that:
$$\bar{\mu}_n = \int_{-\infty}^{+\infty} x^n \bar{p}(x)\,dx \tag{6}$$
and:
$$\underline{\mu}_n = \int_{-\infty}^{+\infty} x^n \underline{p}(x)\,dx, \tag{7}$$
respectively, where $n = 0, 1, \ldots, N$ and $N = 2M$. That is, we will approximately find a special class of nonlinear expectations $\hat{\mathbb{E}}$ that satisfies:
$$\hat{\mathbb{E}}[X^n] = \sup_{\theta \in \Theta} \int_{\mathbb{R}} x^n p_\theta(x)\,dx = \sup_{\underline{\mu}_n \leq \theta_n \leq \bar{\mu}_n} \int_{\mathbb{R}} x^n p_\theta(x)\,dx = \bar{\mu}_n$$
and:
$$-\hat{\mathbb{E}}[-X^n] = -\sup_{\theta \in \Theta} \int_{\mathbb{R}} (-x^n) p_\theta(x)\,dx = \inf_{\underline{\mu}_n \leq \theta_n \leq \bar{\mu}_n} \int_{\mathbb{R}} x^n p_\theta(x)\,dx = \underline{\mu}_n,$$
where $\theta = (\theta_0, \theta_1, \ldots, \theta_N)$.
The rest of this article is organized as follows. In Section 2, we present an alternative sufficient and necessary condition for the existence of solutions $\bar{p}(x)$ and $\underline{p}(x)$ satisfying (6) and (7), respectively. In Section 3, we use the maximum entropy method to find concrete solutions and analyze the convergence of the maximum entropy solutions. In Section 4, we conduct numerical simulations to calculate the maximum entropy density functions.

2. Existence of Solutions for Moment Problems

According to Theorems 1.35 and 1.36 in Akihito [11], the sequences $\{\bar{\mu}_j\}_{j=0}^N$ and $\{\underline{\mu}_j\}_{j=0}^N$ must satisfy certain conditions if they are to determine probability density functions. Therefore, in this section, we consider the sufficient and necessary conditions for the existence of solutions of the moment Problems (6) and (7), respectively.
Let:
$$\bar{\Delta}_{2M} = \begin{pmatrix} \bar{\mu}_0 & \bar{\mu}_1 & \cdots & \bar{\mu}_M \\ \bar{\mu}_1 & \bar{\mu}_2 & \cdots & \bar{\mu}_{M+1} \\ \vdots & \vdots & \ddots & \vdots \\ \bar{\mu}_M & \bar{\mu}_{M+1} & \cdots & \bar{\mu}_{2M} \end{pmatrix} \quad \text{and} \quad \underline{\Delta}_{2M} = \begin{pmatrix} \underline{\mu}_0 & \underline{\mu}_1 & \cdots & \underline{\mu}_M \\ \underline{\mu}_1 & \underline{\mu}_2 & \cdots & \underline{\mu}_{M+1} \\ \vdots & \vdots & \ddots & \vdots \\ \underline{\mu}_M & \underline{\mu}_{M+1} & \cdots & \underline{\mu}_{2M} \end{pmatrix}$$
be the Hankel matrices of the given sequences $\{\bar{\mu}_j\}_{j=0}^{2M}$ and $\{\underline{\mu}_j\}_{j=0}^{2M}$, respectively.
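For readers who wish to experiment, here is a minimal sketch that assembles such a Hankel matrix from a moment sequence and checks its nonnegative definiteness numerically; the standard normal moments used as input are an illustrative assumption.

```python
# Sketch: build the Hankel moment matrix Delta_{2M} and inspect its spectrum.
# The input moments (of N(0,1): 1, 0, 1, 0, 3) are purely illustrative.
import numpy as np
from scipy.linalg import hankel

def hankel_moment_matrix(moments):
    """moments = [mu_0, ..., mu_{2M}]; returns the (M+1) x (M+1) Hankel matrix."""
    M = (len(moments) - 1) // 2
    return hankel(moments[: M + 1], moments[M : 2 * M + 1])

mu = [1, 0, 1, 0, 3]          # mu_0..mu_4 of the standard normal
D = hankel_moment_matrix(mu)
print(np.linalg.eigvalsh(D))  # all eigenvalues >= 0 for a valid moment sequence
```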
The following theorem gives an alternative sufficient and necessary condition for the existence of $\bar{p}(x)$ satisfying (6).
Theorem 1.
A sufficient and necessary condition for the sequence $\{\bar{\mu}_j\}_{j=0}^N$ to determine the probability density function $\bar{p}(x)$ is that $\{\bar{\mu}_j\}_{j=0}^N$ satisfies one of the following two conditions:
(a) For any nonvanishing vector $X_{m-1}^T = (x_0, x_1, \ldots, x_{m-1})$ $(1 \leq m \leq M)$,
$$\bar{\mu}_{2m}\, X_{m-1}^T \bar{\Delta}_{2(m-1)} X_{m-1} > (\bar{U}_m^T X_{m-1})^2; \tag{8}$$
(b) If there exist $m_0 \geq 1$ and a nonvanishing vector $X_{m_0}^T = (x_0, x_1, \ldots, x_{m_0})$ such that $X_{m_0}^T \bar{\Delta}_{2m_0} X_{m_0} = 0$, then:
$$|\bar{\Delta}_{2m}| = 0$$
for $m_0 \leq m \leq M$, where $\bar{U}_m^T = (\bar{\mu}_m, \ldots, \bar{\mu}_{2m-1})$.
Proof. 
We first prove the necessity. Let $X$ be a continuous random variable with density function $\bar{p}(x)$, whose raw moments are:
$$\bar{\mu}_m = E[X^m] = \int_{\mathbb{R}} x^m \bar{p}(x)\,dx, \quad m = 0, 1, \ldots, 2M, \tag{9}$$
where $\bar{\mu}_0 = 1$.
For any $(m+1)$-dimensional nonvanishing vector $X_m = (x_0, x_1, \ldots, x_m)^T$, we can check that:
$$X_m^T \bar{\Delta}_{2m} X_m = E\Big[\Big(\sum_{j=0}^m x_j X^j\Big)^2\Big], \quad m = 0, 1, \ldots, M. \tag{10}$$
In fact, taking $m = 0$, we have:
$$X_0^T \bar{\Delta}_0 X_0 = x_0^2 = E[x_0^2], \tag{11}$$
which means that (10) holds for $m = 0$.
Assume that Equation (10) holds for $0 \leq m \leq M - 1$, that is,
$$X_{M-1}^T \bar{\Delta}_{2(M-1)} X_{M-1} = E\Big[\Big(\sum_{j=0}^{M-1} x_j X^j\Big)^2\Big]. \tag{12}$$
Then, we consider the case $m = M$ and note that the matrix $\bar{\Delta}_{2M}$ can be rewritten in block form as:
$$\bar{\Delta}_{2M} = \begin{pmatrix} \bar{\Delta}_{2(M-1)} & \bar{U}_M \\ \bar{U}_M^T & \bar{\mu}_{2M} \end{pmatrix}. \tag{13}$$
Multiplying (13) by $X_M^T$ on the left and $X_M$ on the right, it follows from (9) and (12) that:
$$\begin{aligned} X_M^T \bar{\Delta}_{2M} X_M &= (X_{M-1}^T, x_M) \begin{pmatrix} \bar{\Delta}_{2(M-1)} & \bar{U}_M \\ \bar{U}_M^T & \bar{\mu}_{2M} \end{pmatrix} \begin{pmatrix} X_{M-1} \\ x_M \end{pmatrix} \\ &= X_{M-1}^T \bar{\Delta}_{2(M-1)} X_{M-1} + 2 x_M \bar{U}_M^T X_{M-1} + \bar{\mu}_{2M} x_M^2 \\ &= E\Big[\Big(\sum_{j=0}^{M-1} x_j X^j\Big)^2\Big] + 2 x_M \sum_{j=0}^{M-1} \bar{\mu}_{M+j} x_j + \bar{\mu}_{2M} x_M^2 \\ &= E\Big[\Big(\sum_{j=0}^{M-1} x_j X^j\Big)^2\Big] + 2 x_M \sum_{j=0}^{M-1} x_j E[X^{M+j}] + x_M^2 E[X^{2M}] \\ &= E\Big[\Big(\sum_{j=0}^{M} x_j X^j\Big)^2\Big] \geq 0. \end{aligned} \tag{14}$$
By mathematical induction, (10) holds for $m = 0, 1, \ldots, M$. Moreover, Equation (10) means that the Hankel matrices are nonnegative definite.
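Identity (10) is also easy to confirm by simulation. The sketch below performs a Monte Carlo check with $m = 2$; the exponential sample distribution and the test vector are arbitrary illustrative choices.

```python
# Sketch: Monte Carlo check of identity (10),
# X_m^T Delta_{2m} X_m = E[(sum_j x_j X^j)^2], for m = 2 (illustrative setup).
import numpy as np

rng = np.random.default_rng(0)
samples = rng.exponential(scale=1.0, size=10**6)   # illustrative distribution
m = 2
moments = [np.mean(samples**k) for k in range(2 * m + 1)]   # mu_0 .. mu_{2m}
D = np.array([[moments[i + j] for j in range(m + 1)] for i in range(m + 1)])

x = np.array([0.7, -1.3, 0.4])                     # arbitrary nonvanishing vector
quad_form = x @ D @ x
poly_mean = np.mean((x[0] + x[1] * samples + x[2] * samples**2) ** 2)
print(quad_form, poly_mean)                        # the two values should agree
```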
We now derive the necessary conditions. Because of the nonnegative definiteness of $\bar{\Delta}_{2m}$ $(0 \leq m \leq M)$, there are two possible cases for $X_m^T \bar{\Delta}_{2m} X_m$: either $X_m^T \bar{\Delta}_{2m} X_m > 0$ for every nonzero vector $X_m$, or $X_{m_0}^T \bar{\Delta}_{2m_0} X_{m_0} = 0$ for some nonzero vector $X_{m_0}$. Thus, we divide the proof into two cases.
Case 1: Let $X_m^T \bar{\Delta}_{2m} X_m = E[(\sum_{j=0}^m x_j X^j)^2] > 0$ for any nonzero vector $X_m$ $(1 \leq m \leq M)$; by (14), we have:
$$E\Big[\Big(\sum_{j=0}^m x_j X^j\Big)^2\Big] = X_{m-1}^T \bar{\Delta}_{2(m-1)} X_{m-1} + 2 x_m \bar{U}_m^T X_{m-1} + \bar{\mu}_{2m} x_m^2. \tag{15}$$
We regard the right-hand side of (15) as a quadratic in the single variable $x_m$. Since the value of (15) is greater than zero, its discriminant must be negative, i.e.,
$$(\bar{U}_m^T X_{m-1})^2 - \bar{\mu}_{2m}\, X_{m-1}^T \bar{\Delta}_{2(m-1)} X_{m-1} < 0. \tag{16}$$
This implies that (a) in Theorem 1 holds for $m = 1, 2, \ldots, M$.
Case 2: Let $X_{m_0-1}^T \bar{\Delta}_{2(m_0-1)} X_{m_0-1} = 0$ for some nonvanishing vector $X_{m_0-1}$ $(2 \leq m_0 \leq M + 1)$. For $m > m_0 - 1$, write $m = m_0 - 1 + k_0$, $k_0 = 1, 2, \ldots, M - m_0 + 1$. We choose the vector $X_m^T = (X_{m_0-1}^T, \underbrace{0, \ldots, 0}_{k_0})$ and multiply $\bar{\Delta}_{2m}$ by $X_m^T$ and $X_m$. Here, we consider only the case $k_0 = 1$ as an example; the same method applies for $2 \leq k_0 \leq M - m_0 + 1$. Then,
$$X_{m_0}^T \bar{\Delta}_{2m_0} X_{m_0} = (X_{m_0-1}^T, 0) \begin{pmatrix} \bar{\Delta}_{2(m_0-1)} & \bar{U}_{m_0} \\ \bar{U}_{m_0}^T & \bar{\mu}_{2m_0} \end{pmatrix} \begin{pmatrix} X_{m_0-1} \\ 0 \end{pmatrix} = X_{m_0-1}^T \bar{\Delta}_{2(m_0-1)} X_{m_0-1}. \tag{17}$$
Since $\bar{\Delta}_{2m_0}$ is nonnegative definite and $X_{m_0-1}^T \bar{\Delta}_{2(m_0-1)} X_{m_0-1} = 0$, it follows from (17) that $|\bar{\Delta}_{2m_0}| = 0$; that is, (b) holds.
Next, we prove the sufficiency via its contrapositive. In other words, we need to show that there is no probability density function $\bar{p}(x)$ whose moments $\{\bar{\mu}_j\}_{j=0}^N$ satisfy the following: there exist a positive integer $m_0$ $(2 \leq m_0 \leq M + 1)$ and an $m_0$-dimensional nonzero vector $X_{m_0-1}$ such that:
(i) $\bar{\mu}_{2m_0}\, X_{m_0-1}^T \bar{\Delta}_{2(m_0-1)} X_{m_0-1} \leq (\bar{U}_{m_0}^T X_{m_0-1})^2$;
(ii) if $X_{m_0-1}^T \bar{\Delta}_{2(m_0-1)} X_{m_0-1} = 0$, then there exists a positive integer $m_1 > m_0 - 1$ such that $|\bar{\Delta}_{2m_1}| > 0$.
Now, we argue by contradiction. Suppose such a density function $\bar{p}(x)$ exists. If $X_{m_0}^T \bar{\Delta}_{2m_0} X_{m_0} > 0$ holds for any $(m_0+1)$-dimensional nonvanishing vector, then it follows from (15) that:
$$(\bar{U}_{m_0}^T X_{m_0-1})^2 - \bar{\mu}_{2m_0}\, X_{m_0-1}^T \bar{\Delta}_{2(m_0-1)} X_{m_0-1} < 0. \tag{18}$$
The above inequality contradicts Condition (i).
If the moments $\bar{\mu}_j$ satisfy Condition (ii), then there exists an $m_0$-dimensional nonzero vector such that $X_{m_0-1}^T \bar{\Delta}_{2(m_0-1)} X_{m_0-1} = 0$. By Condition (ii), without loss of generality, we let $m_1 = m_0$ and repeat the derivation of (17) to get:
$$0 < X_{m_0}^T \bar{\Delta}_{2m_0} X_{m_0} = X_{m_0-1}^T \bar{\Delta}_{2(m_0-1)} X_{m_0-1} = 0, \tag{19}$$
where $X_{m_0}^T = (X_{m_0-1}^T, 0)$. Obviously, the relationship in (19) is contradictory.
This completes the proof.  □
Remark 1.
If we replace $\bar{\mu}_m$, $\bar{\Delta}_{2m}$, and $\bar{U}_m$ with $\underline{\mu}_m$, $\underline{\Delta}_{2m}$, and $\underline{U}_m$, respectively, in Theorem 1, then we obtain the sufficient and necessary conditions for the existence of the probability density function $\underline{p}(x)$.
Note that Theorem 1 is stated for a one-dimensional random variable. Now, we extend it to the two-dimensional case. Let $X$ and $Y$ be continuous random variables with joint probability density function $p_\theta(x, y)$. The two-dimensional moment problems are defined as follows: find joint probability density functions $\bar{p}(x, y)$ and $\underline{p}(x, y)$ such that:
$$\bar{\mu}_{ij} = \int_{\mathbb{R}} \int_{\mathbb{R}} x^i y^j \bar{p}(x, y)\,dx\,dy, \quad (i, j) \in \mathcal{M}, \tag{20}$$
and:
$$\underline{\mu}_{ij} = \int_{\mathbb{R}} \int_{\mathbb{R}} x^i y^j \underline{p}(x, y)\,dx\,dy, \quad (i, j) \in \mathcal{M}, \tag{21}$$
where the index set:
$$\mathcal{M} = \{(i, j) \mid i, j = 0, 1, \ldots, 2M \ \text{and} \ 0 \leq i + j \leq 2M\}.$$
Here, we again take the sequence $\{\bar{\mu}_{ij}\}_{(i,j) \in \mathcal{M}}$ as an example and present the main result in the following theorem without proof. Let:
$$\bar{\Gamma}_{2m+1} := \begin{pmatrix} \bar{\mu}_{00} & \bar{\mu}_{10} & \cdots & \bar{\mu}_{m0} & \bar{\mu}_{01} & \bar{\mu}_{02} & \cdots & \bar{\mu}_{0m} \\ \bar{\mu}_{10} & \bar{\mu}_{20} & \cdots & \bar{\mu}_{m+1,0} & \bar{\mu}_{11} & \bar{\mu}_{12} & \cdots & \bar{\mu}_{1m} \\ \vdots & \vdots & & \vdots & \vdots & \vdots & & \vdots \\ \bar{\mu}_{m0} & \bar{\mu}_{m+1,0} & \cdots & \bar{\mu}_{2m,0} & \bar{\mu}_{m1} & \bar{\mu}_{m2} & \cdots & \bar{\mu}_{mm} \\ \bar{\mu}_{01} & \bar{\mu}_{11} & \cdots & \bar{\mu}_{m1} & \bar{\mu}_{02} & \bar{\mu}_{03} & \cdots & \bar{\mu}_{0,m+1} \\ \vdots & \vdots & & \vdots & \vdots & \vdots & & \vdots \\ \bar{\mu}_{0m} & \bar{\mu}_{1m} & \cdots & \bar{\mu}_{mm} & \bar{\mu}_{0,m+1} & \bar{\mu}_{0,m+2} & \cdots & \bar{\mu}_{0,2m} \end{pmatrix} \tag{22}$$
be the Hankel-type matrix generated by the given sequence $\{\bar{\mu}_{ij}\}_{(i,j) \in \mathcal{M}}$.
Theorem 2.
The sequence $\{\bar{\mu}_{ij}\}_{(i,j) \in \mathcal{M}}$ determines the density function $\bar{p}(x, y)$ if and only if it satisfies one of the following conditions:
(a) For any $(m-1)$-dimensional vectors $\bar{X}_{m-1}^T = (x_1, \ldots, x_{m-1})$, $Y_{m-1}^T = (y_1, \ldots, y_{m-1})$ and $m$-dimensional vectors $X_m^T = (x_0, \ldots, x_{m-1})$, $\bar{Y}_m^T = (y_1, y_2, \ldots, y_m)$ $(1 \leq m \leq M)$,
$$\bar{\mu}_{2m,0}\, Z_{2m+1}^T \bar{\Gamma}_{2m+1} Z_{2m+1} > (\bar{U}_{\cdot 0}^T X_m + \bar{U}_{m \cdot}^T Y_{m-1})^2$$
and:
$$\bar{\mu}_{0,2m}\, Z_{2m+1}^T \bar{\Gamma}_{2m+1} Z_{2m+1} > (\bar{U}_{\cdot m}^T \bar{X}_{m-1} + \bar{U}_{0 \cdot}^T \bar{Y}_m)^2;$$
(b) If there exists $m_0 \geq 1$ such that $Z_{2m_0+1}^T \bar{\Gamma}_{2m_0+1} Z_{2m_0+1} = 0$, then:
$$|\bar{\Gamma}_{2l+1}| = 0$$
for $m_0 \leq l \leq M$, where $Z_{2m+1}^T = (X_m^T, x_m, Y_{m-1}^T, y_m)$ and $\bar{U}_{\cdot 0}^T = (\bar{\mu}_{m0}, \ldots, \bar{\mu}_{2m-1,0})$, $\bar{U}_{0 \cdot}^T = (\bar{\mu}_{0m}, \ldots, \bar{\mu}_{0,2m-1})$, $\bar{U}_{\cdot m}^T = (\bar{\mu}_{1m}, \ldots, \bar{\mu}_{m-1,m})$, $\bar{U}_{m \cdot}^T = (\bar{\mu}_{m1}, \ldots, \bar{\mu}_{m,m-1})$.

3. Maximum Entropy for Moment Problems

In Section 2, we discussed the existence of solutions of the moment problems. Now, we will use the maximum entropy method to get the solutions for Problems (6) and (7). Moreover, we will consider the convergence of maximum entropy density functions.
Given the first $N + 1$ moments $\bar{\mu}_0, \bar{\mu}_1, \ldots, \bar{\mu}_N$, the core idea of the maximum entropy method is to find the probability density function $\bar{p}_N(x)$ such that:
$$\bar{\mu}_j = \int_{\mathbb{R}} x^j \bar{p}_N(x)\,dx, \quad j = 0, 1, \ldots, N, \tag{23}$$
where $\bar{\mu}_0 = 1$.
The Lagrangian can be defined as:
$$L(\bar{p}, \bar{\lambda}_0, \ldots, \bar{\lambda}_N) = -\int_{\mathbb{R}} \bar{p}(x) \ln \bar{p}(x)\,dx + \sum_{j=0}^N \bar{\lambda}_j \Big( \int_{\mathbb{R}} x^j \bar{p}(x)\,dx - \bar{\mu}_j \Big), \tag{24}$$
where $\bar{\lambda}_j$, $j = 0, 1, \ldots, N$, are the Lagrange multipliers.
Taking the functional variation with respect to $\bar{p}(x)$, we obtain:
$$\bar{p}_N(x) = \exp\Big\{ -\sum_{j=0}^N \bar{\lambda}_j x^j \Big\} = \arg\max_{\bar{p}} \Big\{ -\int_{\mathbb{R}} \bar{p}(x) \ln \bar{p}(x)\,dx \Big\} \tag{25}$$
and (23) holds. The values of $\bar{\lambda}_j$ can be calculated by solving the system of $N + 1$ equations resulting from the moment conditions (23). Here, we take $N = 2$ as an example and work out the values of $\bar{\lambda}_j$ $(j = 0, 1, 2)$ as:
$$\bar{\lambda}_0 = \frac{\bar{\mu}_1^2}{2(\bar{\mu}_2 - \bar{\mu}_1^2)} + \ln \sqrt{2\pi(\bar{\mu}_2 - \bar{\mu}_1^2)}, \quad \bar{\lambda}_1 = -\frac{\bar{\mu}_1}{\bar{\mu}_2 - \bar{\mu}_1^2}, \quad \bar{\lambda}_2 = \frac{1}{2(\bar{\mu}_2 - \bar{\mu}_1^2)}.$$
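These multipliers (as reconstructed above) can be checked numerically: substituting them into (25) should reproduce $\bar{\mu}_0 = 1$, $\bar{\mu}_1$, and $\bar{\mu}_2$ by quadrature. The moment values in this sketch are illustrative assumptions.

```python
# Sketch: verify the closed-form multipliers for N = 2, assuming the
# illustrative moments mu1 = 0.5, mu2 = 1.25 (so variance mu2 - mu1^2 = 1).
import numpy as np
from scipy.integrate import quad

mu1, mu2 = 0.5, 1.25
var = mu2 - mu1**2
lam0 = mu1**2 / (2 * var) + np.log(np.sqrt(2 * np.pi * var))
lam1 = -mu1 / var
lam2 = 1 / (2 * var)

p = lambda x: np.exp(-(lam0 + lam1 * x + lam2 * x**2))   # density (25)
for j in range(3):   # should print mu_0 = 1, mu_1 = 0.5, mu_2 = 1.25
    print(quad(lambda x: x**j * p(x), -np.inf, np.inf)[0])
```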
So far, we have considered the existence of solutions of the moment Problems (6) and (7) for all $N \geq 1$. According to Theorem 3.3.11 in Durrett [12], the probability density function $\bar{p}(x)$ is also unique in the sense of weak convergence (see (27)) as long as its moments $\{\bar{\mu}_j\}_{j=0}^{+\infty}$ satisfy Conditions (a) and (b) in Theorem 1 and:
$$\limsup_{m \to +\infty} \frac{\bar{\mu}_{2m}^{1/2m}}{2m} < \infty. \tag{26}$$
By the analysis in Frontini [13], we have the following entropy convergence for the maximum entropy solution $\bar{p}_N(x)$:
$$\lim_{N \to +\infty} \int_{\mathbb{R}} \bar{p}_N(x) \ln \bar{p}_N(x)\,dx = \int_{\mathbb{R}} \bar{p}(x) \ln \bar{p}(x)\,dx. \tag{27}$$
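In practice, for $N > 2$ the system (23) has no closed form, so the multipliers must be found numerically. The following sketch does this with a generic root finder; the target moments, the truncated integration interval $[-L, L]$, and the starting point are all illustrative assumptions rather than the paper's algorithm.

```python
# Sketch: solve the moment conditions (23) for the multipliers lambda_j with a
# generic root finder. Assumptions: target moments of N(0,1), N = 4, and a
# truncated integration interval [-L, L] standing in for the real line.
import numpy as np
from scipy.integrate import quad
from scipy.optimize import fsolve

target = np.array([1.0, 0.0, 1.0, 0.0, 3.0])   # mu_0..mu_4 of N(0,1)
N, L = 4, 8.0

def residuals(lam):
    p = lambda x: np.exp(-sum(lam[j] * x**j for j in range(N + 1)))
    return [quad(lambda x: x**j * p(x), -L, L)[0] - target[j] for j in range(N + 1)]

lam_start = np.array([1.0, 0.0, 0.5, 0.0, 0.0])   # start near the Gaussian solution
lam = fsolve(residuals, lam_start)
print(lam)   # lambda_2 should come out near 0.5 and lambda_4 near 0
```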
Remark 2.
By an argument analogous to the one-dimensional case, we can use the maximum entropy method to solve for a concrete joint probability density function $\bar{p}_N(x, y)$ of the two-dimensional moment problem (20). The detailed process is shown in the next section and is not repeated here.
By Theorem 2 and Theorem 14.20 in Schmüdgen [14], if the marginal moments $\{\bar{\mu}_{m \cdot}\}_{m=0}^{+\infty}$ and $\{\bar{\mu}_{\cdot m}\}_{m=0}^{+\infty}$ satisfy Conditions (a) and (b) in Theorem 2 and the following multivariate Carleman condition [14]:
$$\sum_{m=0}^{+\infty} \bar{\mu}_{2m,i}^{-\frac{1}{2m}} = \sum_{m=0}^{+\infty} \bar{\mu}_{i,2m}^{-\frac{1}{2m}} = +\infty \tag{28}$$
for $0 \leq i \leq N$, then there exists a unique joint probability density function $\bar{p}(x, y)$ satisfying (20).
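As a sanity check of the Carleman-type condition, the partial sums in (28) can be computed for a concrete moment sequence. The sketch below uses the even moments $(2m-1)!!$ of the standard normal, an illustrative choice, and shows the partial sums growing without bound.

```python
# Sketch: partial sums of the Carleman series (28) for the even moments of
# N(0,1), mu_{2m} = (2m-1)!!; log-gamma avoids overflowing factorials.
import math

partial = 0.0
for m in range(1, 201):
    # ln((2m-1)!!) = ln((2m)!) - m*ln(2) - ln(m!)
    log_mu_2m = math.lgamma(2 * m + 1) - m * math.log(2) - math.lgamma(m + 1)
    partial += math.exp(-log_mu_2m / (2 * m))
print(partial)   # keeps growing as more terms are added, so the series diverges
```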
The convergence rate of the maximum entropy density $\bar{p}_N(x, y)$ has been analyzed in Frontini [13], and the results are analogous to (27).

4. Numerical Experiments

In this section, we conduct numerical experiments to calculate two-dimensional maximum entropy density functions. We use the maximum entropy method proposed in Section 3 to calculate the joint probability density functions of the weekly closing price $X$ and the weekly rate of return $Y$ of Shanghai A shares.
By random sampling, we collect the maximum (Table 1) and minimum (Table 2) values of the mixed sample moments of orders one and two for $X$ and $Y$.
The two-dimensional maximum entropy problem is defined as follows: find a probability density function $\bar{p}_2(x, y)$ such that:
$$\bar{p}_2(x, y) = \arg\max_{\bar{p}} \Big\{ -\int_{\mathbb{R}} \int_{\mathbb{R}} \bar{p}(x, y) \ln \bar{p}(x, y)\,dx\,dy \Big\}$$
and:
$$\bar{\mu}_{kj} = \int_{\mathbb{R}} \int_{\mathbb{R}} x^k y^j \bar{p}_2(x, y)\,dx\,dy \tag{29}$$
for $k, j = 0, 1, 2$ and $0 \leq k + j \leq 2$.
Note the factorization:
$$\bar{p}_2(x, y) = \bar{p}_2(y \mid x)\, \bar{p}_2(x), \tag{30}$$
where $\bar{p}_2(y \mid x)$ denotes the maximum entropy conditional density function. From (30), we can obtain the joint density function $\bar{p}_2(x, y)$ as long as we deduce $\bar{p}_2(y \mid x)$ and $\bar{p}_2(x)$. Let:
$$\bar{\mu}_{0j}(x) = \int_{\mathbb{R}} y^j \bar{p}_2(y \mid x)\,dy, \quad j = 0, 1, 2, \tag{31}$$
be the conditional moments of the random variable $Y$ given $X$, which satisfy:
$$\int_{\mathbb{R}} x^k \bar{\mu}_{0j}(x)\, \bar{p}_2(x)\,dx = \bar{\mu}_{kj} \tag{32}$$
for $k, j = 0, 1, 2$ and $0 \leq k + j \leq 2$.
According to Table 1 and the definition (22), we have:
$$\bar{\Gamma}_3 = \begin{pmatrix} 1 & 30.781 & 0.033 \\ 30.781 & 1509.5 & 1.0158 \\ 0.033 & 1.0158 & 0.0086 \end{pmatrix} \quad \text{and} \quad \bar{\Delta}_{2 \cdot} = \begin{pmatrix} 1 & 30.781 \\ 30.781 & 1509.5 \end{pmatrix}. \tag{33}$$
It is easy to verify that $\bar{\Gamma}_3$ and $\bar{\Delta}_{2 \cdot}$ are positive definite. Hence, by Theorem 1, there exists a density function $\bar{p}_2(x)$ determined by $\{\bar{\mu}_{i0},\ i = 0, 1, 2\}$.
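The positive-definiteness claim can be verified directly with the numbers in (33); a minimal sketch using numpy's symmetric eigensolver:

```python
# Sketch: positive-definiteness check for Gamma_3 and Delta_2 in (33),
# using the Table 1 values.
import numpy as np

Gamma3 = np.array([[1.0,    30.781, 0.033 ],
                   [30.781, 1509.5, 1.0158],
                   [0.033,  1.0158, 0.0086]])
Delta2 = np.array([[1.0,    30.781],
                   [30.781, 1509.5]])

print(np.linalg.eigvalsh(Gamma3))   # all eigenvalues positive
print(np.linalg.eigvalsh(Delta2))   # all eigenvalues positive
```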
To derive the explicit expression of $\bar{p}_2(x, y)$, we need to construct the conditional moments $\{\bar{\mu}_{0j}(x)\}_{j=0}^2$ on the basis of (32). Without loss of generality, we suppose $\bar{\mu}_{01}(x)$ is a constant. Combining (32) yields:
$$\bar{\mu}_{00}(x) = 1, \quad \bar{\mu}_{01}(x) = \bar{\mu}_{01}, \quad \bar{\mu}_{02}(x) = e^{-\bar{\delta} x^2} + \bar{\mu}_{01}^2, \quad \bar{\delta} = 40.7624. \tag{34}$$
By calculation, we derive the expression of $\bar{p}_2(x, y)$ as follows:
$$\bar{p}_2(x, y) = \frac{1}{\sqrt{2\pi}} \exp\Big\{ -1 - \bar{\lambda}_0 - \bar{\lambda}_1 x - \bar{\lambda}_2 x^2 - \frac{1}{2} \bar{\mu}_{01}^2 e^{\bar{\delta} x^2} + \bar{\mu}_{01}\, y\, e^{\bar{\delta} x^2} - \frac{1}{2} y^2 e^{\bar{\delta} x^2} \Big\}, \tag{35}$$
where $\bar{\lambda}_0 = 3.9276$, $\bar{\lambda}_1 = -0.0548$, and $\bar{\lambda}_2 = 0.0009$.
In the same way, with the data in Table 2, we obtain the following probability density function $\underline{p}_2(x, y)$ determined by $\{\underline{\mu}_{kj},\ k, j = 0, 1, 2 \ \text{and} \ 0 \leq k + j \leq 2\}$:
$$\underline{p}_2(x, y) = \frac{1}{\sqrt{2\pi}} \exp\Big\{ -1 - \underline{\lambda}_0 - \underline{\lambda}_1 x - \underline{\lambda}_2 x^2 - \frac{1}{2} \underline{\mu}_{01}^2 e^{\underline{\delta} x^2} + \underline{\mu}_{01}\, y\, e^{\underline{\delta} x^2} - \frac{1}{2} y^2 e^{\underline{\delta} x^2} \Big\}, \tag{36}$$
where $\underline{\lambda}_0 = 3.1240$, $\underline{\lambda}_1 = -0.1034$, $\underline{\lambda}_2 = 0.0036$, and $\underline{\delta} = 773.7203$.
Figure 1 and Figure 2 show the maximum entropy marginal density functions $\bar{p}_2(x)$, $\underline{p}_2(x)$ (derived from (25) with $N = 2$) and the joint density functions $\bar{p}_2(x, y)$, $\underline{p}_2(x, y)$ (derived in (35) and (36)), respectively.
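As a reproducibility aid, the marginal density sketched in Figure 1 can be redrawn from the fitted multipliers; the values below are taken from (35) as reconstructed here, and the plotting range is an illustrative choice.

```python
# Sketch: redraw the marginal maximum entropy density p_2(x) of (25) with the
# multipliers of (35); the x-range is an illustrative assumption.
import numpy as np
import matplotlib.pyplot as plt

lam0, lam1, lam2 = 3.9276, -0.0548, 0.0009
p = lambda x: np.exp(-1 - lam0 - lam1 * x - lam2 * x**2)   # marginal of (35)

x = np.linspace(-50, 110, 400)
plt.plot(x, p(x))
plt.xlabel("x (weekly closing price)")
plt.ylabel(r"$\bar{p}_2(x)$")
plt.show()
```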

Author Contributions

Conceptualization, L.G. and D.H.; methodology, D.H. and L.G.; software, L.G.; validation, L.G.; formal analysis, L.G.; investigation, L.G.; resources, L.G.; data curation, L.G.; writing—original draft preparation, L.G.; writing—review and editing, L.G.; visualization, L.G.; supervision, L.G. and D.H.; project administration, L.G.; funding acquisition, D.H.

Funding

This work was supported by the National Natural Science Foundation of China [grant number 11531001] and the National Program on Key Basic Research Project [grant number 2015CB856004].

Conflicts of Interest

The authors declare no conflict of interest.

References

  1. Peng, S.G. Filtration consistent nonlinear expectations and evaluations of contingent claims. Acta Math. Appl. Sin. 2004, 20, 191–214. [Google Scholar] [CrossRef]
  2. Peng, S.G. Nonlinear expectations and nonlinear Markov chains. Chin. Ann. Math. Ser. B 2005, 26, 159–184. [Google Scholar] [CrossRef]
  3. Peng, S.G. Survey on normal distributions, central limit theorem, Brownian motion and the related stochastic calculus under sublinear expectations. Sci. China 2009, 52, 1391–1411. [Google Scholar] [CrossRef]
  4. Peng, S.G. G-Expectation, G-Brownian motion and related stochastic calculus of Itô type. Stoch. Anal. Appl. 2007, 2, 541–567. [Google Scholar]
  5. Peng, S.G. Law of large numbers and central limit theorem under nonlinear expectations. arXiv, 2007; arXiv:math/0702358. [Google Scholar]
  6. Peng, S.G. G-Brownian motion and dynamic risk measure under volatility uncertainty. arXiv, 2007; arXiv:0711.2834. [Google Scholar]
  7. Peng, S.G. A new central limit theorem under sublinear expectations. Mathematics 2008, 53, 1989–1994. [Google Scholar]
  8. Peng, S.G. Multi-dimensional G-Brownian motion and related stochastic calculus under G-expectation. Stoch. Process. Their Appl. 2006, 118, 2223–2253. [Google Scholar] [CrossRef]
  9. Hu, M.S. Explicit solutions of G-heat equation with a class of initial conditions by G-Brownian motion. Nonlinear Anal. Theory Methods Appl. 2012, 75, 6588–6595. [Google Scholar] [CrossRef]
  10. Gong, X.S.; Yang, S.Z. The application of G-heat equation and numerical properties. arXiv, 2013; arXiv:1304.1599v2. [Google Scholar]
  11. Akihito, H.; Nobuaki, O. Quantum Probability and Spectral Analysis of Graphs; Springer: Berlin/Heidelberg, Germany, 2007. [Google Scholar]
  12. Durrett, R. Probability: Theory and Examples; Cambridge University Press: Cambridge, UK, 1996. [Google Scholar]
  13. Frontini, M.; Tagliani, A. Entropy convergence in Stieltjes and Hamburger moment problem. Appl. Math. Comput. 1997, 88, 39–51. [Google Scholar] [CrossRef]
  14. Schmüdgen, K. The Moment Problem; Springer: Berlin/Heidelberg, Germany, 2017. [Google Scholar]
Figure 1. The maximum entropy marginal density $\bar{p}_2(x)$ (left) and joint density $\bar{p}_2(x, y)$ (right) determined by $\{\bar{\mu}_{j0}\}_{j=0}^2$ and $\{\bar{\mu}_{ij}\}_{(i,j) \in \mathcal{M}}$, respectively.
Figure 2. The maximum entropy marginal density $\underline{p}_2(x)$ (left) and joint density $\underline{p}_2(x, y)$ (right) determined by $\{\underline{\mu}_{j0}\}_{j=0}^2$ and $\{\underline{\mu}_{ij}\}_{(i,j) \in \mathcal{M}}$, respectively.
Table 1. The maximum values of the sample moments.

Moments | $\bar{\mu}_{00}$ | $\bar{\mu}_{10}$ | $\bar{\mu}_{20}$ | $\bar{\mu}_{11}$ | $\bar{\mu}_{01}$ | $\bar{\mu}_{02}$
Values | 1 | 30.781 | 1.5095 × 10³ | 1.0158 | 0.033 | 0.0086

Table 2. The minimum values of the sample moments.

Moments | $\underline{\mu}_{00}$ | $\underline{\mu}_{10}$ | $\underline{\mu}_{20}$ | $\underline{\mu}_{11}$ | $\underline{\mu}_{01}$ | $\underline{\mu}_{02}$
Values | 1 | 14.311 | 343.1627 | −1.2451 | −0.087 | 0.0031
