Article

Motivation to Run in One-Day Cricket

by Paramahansa Pramanik 1,* and Alan M. Polansky 2
1 Department of Mathematics and Statistics, University of South Alabama, Mobile, AL 36688, USA
2 Department of Statistics and Actuarial Science, Northern Illinois University, DeKalb, IL 60115, USA
* Author to whom correspondence should be addressed.
Mathematics 2024, 12(17), 2739; https://doi.org/10.3390/math12172739
Submission received: 23 July 2024 / Revised: 29 August 2024 / Accepted: 30 August 2024 / Published: 2 September 2024
(This article belongs to the Special Issue Advances in Probability Theory and Stochastic Analysis)

Abstract

This paper presents a novel approach to identify an optimal coefficient for evaluating a player's batting average, strike rate, and bowling average, aimed at achieving an optimal team score through dynamic modeling using a path integral method. Additionally, it introduces a new model for run dynamics, represented as a stochastic differential equation, which factors in the average weather conditions at the cricket ground, the specific weather conditions on the match day (including abrupt changes that may halt the game), total attendance, and home field advantage. An analysis of real data has been performed to validate the theoretical results.
MSC:
Primary 60H05; Secondary 81Q30

1. Introduction

In this paper, we determine an optimal control coefficient for a team in a one-day match with and without interruption. A Feynman-type path integral has been introduced which differs from the analytic path integral control approach. Under the analytic approach, a Cauchy problem is considered and optimal control is obtained by a conditional expectation with respect to the state variable. The first problem with this method is that one must go through the computation of a very difficult value function. Second, the value function must be integrable everywhere in the Hilbert space, which is very difficult to verify before a process starts. Our approach depends on a stochastic Lagrangian and a continuously differentiable Itô process, which replaces the functional form of the run dynamics derived from a stochastic differential equation. A Wick-rotated Schrödinger-type equation has been constructed to obtain a better understanding of the run dynamics [1]. Since a Wick-rotated Schrödinger-type equation is equivalent to a forward Fokker–Planck equation, our approach can be compared with the mean-field game approach introduced by Jean-Michel Lasry and Pierre-Louis Lions (2007) [2]. Throughout this paper, we treat run dynamics as a state variable and the variable that represents the aggressiveness of a player as the control variable. Since in every game a player uses either an offensive or defensive approach, our method can be used in any sport. Although there are several studies on sports modeling in the literature, to the authors' knowledge, none of them use stochastic control theory. Furthermore, the Feynman-type path integral approach can be used in any other stochastic control problem in economics, finance, and biology. To test our model, we perform an analysis on real data.
In the recent past, there has been a growing interest in developing mathematical models for cricket. The rise in popularity of T20 cricket has boosted the sport’s appeal in the Asian subcontinent, Australia, England, New Zealand, and the West Indies. Consequently, a more advanced model is needed to predict a team’s score in case a match is interrupted by rain. Methods like the average run rate, most productive overs, discounted most productive overs, parabola, Clark curves, Duckworth–Lewis method, and the modified Duckworth–Lewis method have been used to set up a target score if a match is stopped because of rain [3]. Dynamic programming methods have been used by [4,5].
In this paper, we consider modeling the discounted score of a player for the last 10 matches using hyperbolic discounting, giving the highest weight to the last match and the least weight corresponding to the match 10 matches before the current game. This is motivated by the reasoning that a player’s expected performance in the current game will depend more on his recent past performance instead of his far past performance. Then, we define an objective function based on the performance of 11 players in a team subject to stochastic differential run dynamics with finite drift and diffusion components. We construct an optimal strategy for a player to score in a game under uncertainties such as the strategies of the opposition players and the environment under a Liouville-like quantum gravity strategy space. To the authors’ knowledge, this is the first time that this type of stochastic differential game theoretic approach has been applied in the game of cricket. The dynamic approach in cricket was first introduced by Clarke et al. (1998) [6]. They developed a stochastic control model for optimizing batting strategies in one-day cricket. The model aims to maximize the expected number of runs scored by deciding whether to take risks or play conservatively based on the match situation, number of overs remaining, and the number of wickets in hand. Preston et al. (2000) [7] use dynamic programming to optimize cricket strategies in one-day matches. The authors focus on decision-making at various stages of the game, particularly on determining optimal strategies for different match conditions. Scarf et al. (2005) [8] present stochastic models to evaluate different strategies in one-day cricket. They use statistical techniques to model uncertainty and apply these models to assess the impact of various decisions on match outcomes. Norton et al. (2008) [9] use a dynamic programming approach to analyze the strategies in a one-day cricket match. The study focuses on both the first and second innings, examining how a team’s strategy can adapt to the type of ball bowled and the match situation. For the team batting first, the goal is to maximize the expected score. For the team batting second, the strategy is aimed at maximizing the probability of outscoring the first team, which translates to maximizing the chances of winning the match. The dynamic programming model allows for real-time adjustments to the scoring strategy based on the conditions, offering estimates for the optimal number of runs and the probability of winning at different stages of the game. The study also considers the effects of tail-end batsmen and the role of a “fifth bowler” on the overall strategy. Through simulation, the authors estimate the variance in total score when following the optimal strategy in the first innings. None of these papers consider a stochastic differential game theoretic approach under Liouville quantum gravity strategy space.
Based on Hebbian learning, humans can be considered automatons, in the sense that when a person sees certain objects their outer neurons expand or contract in a certain unique way and send signals to the inner neurons through a synaptic system [10]. As outer neurons send electrons to inner neurons, they can take infinitely many possible paths in the synaptic system, and hence, mimic [11] the path integral method, and by the Riemann–Lebesgue lemma the path integral is measurable [12]. Once the signals reach the inner neurons, the person can observe the object and is able to make decisions about it. Therefore, the decisions of a player are the realization of a stochastic process. The Feynman path integral method provides an easier solution compared to going through the difficult Hamilton–Jacobi–Bellman (HJB) equation [13,14].
Feynman path integral control employs the quantum Lagrangian function, where Schrödinger quantization utilizes the quantum Hamiltonian approach [15]. Path integral control offers a different perspective from Schrödinger’s approach, making it immensely important across fields like quantum physics, engineering, biophysics, economics, and finance [16,17]. Despite the belief in the equivalence of these two methods, mathematical complexities have prevented a complete proof of this equivalence [15]. Grid-based partial differential equation (PDE) solvers face exponentially increasing complexity and memory demands with higher system dimensions, rendering them impractical for high-dimensional cases. A Monte Carlo scheme offers an alternative, forming the core concept of path integral control [16,18,19]. This method addresses certain stochastic control problems using a Monte Carlo approach for solving HJB equations [17]. While incorporating randomness into the HJB equation is relatively straightforward, challenges arise from the dimensionality involved in calculating numerical solutions for both deterministic and stochastic HJB equations [10]. The HJB equation assumes that the feasible set of actions is limited to state and control variables only. This assumption does not hold for many economic problems that involve forward-looking constraints, where future actions are included in the feasible set [20]. With forward-looking constraints, the optimal plan does not adhere to Pontryagin’s maximum principle, and the standard form of the solution is not applicable, as choosing an action implies a commitment to a future action. The lack of a standard recursive formulation [21] makes it difficult to solve dynamic control problems with high dimensions [22] and prevents one from obtaining a numerical solution for the system [17].
The paper is organized as follows. Section 2.1 describes the construction of a player's optimal strategy in the absence of a rain interruption. Section 2.2 discusses different derivations of the SDE obtained from Section 2.1. Section 3 introduces a Henstock–Kurzweil–Feynman-type path integral to obtain a functional form of a player's optimal coefficient of control in the presence of a rain interruption. An analysis of real data is performed in Section 4. Finally, Section 5 provides a brief discussion.

2. Construction of the Problem

2.1. The Game and Its Probabilistic Foundation

Before going into mathematical details, let us introduce the game. A cricket field is an open, circular expanse of closely cropped turf, completely encircled by a rope boundary line. Although there is no official size of a field, the ones used in professional cricket are seldom less than 150 yards in diameter, and some as wide as 200 yards or more [23]. The only noticeable objects on the field are two sets of three short, wooden poles, standing upright, facing each other, in the center of the playing area, 22 yards apart, the so-called wickets. Each of the wickets is 28 inches high, 9 inches wide, and holds two small, cylindrical 4.38 -inch pieces of molded wood, called bails. Four feet in front, and running parallel to each wicket, is a 12-foot chalk line called the popping crease. Another name for this is the batting crease; it is, in effect, the batsman’s safe line. Anytime a batsman is completely over this line or away from the wicket they can be run out. On the other hand, if the batsman has some part of their body behind the line, they are safe. Running back towards the wicket at right angles to the popping crease, eight feet and eight inches apart, are two eight-foot chalk lines called the return creases. They also demarcate the operating area for a bowler. If during a delivery a bowler steps beyond the popping crease, it is considered a no-ball. On the other hand, if a bowler delivers a ball outside of the return creases, it is a wide ball. For each no-ball or wide ball, the batting team receives an extra run, and the bowler has to deliver an extra ball in their over.
The most distinctive equipment of this game is a large paddle-shaped bat. It is made of willow, does not have any weight restrictions, but cannot be longer than thirty-eight inches or wider than 4.25 inches [23]. If a batsman is a stroke player (i.e., plays on timing of the ball without giving extra effort to hit the ball), a heavier bat is preferable, while a hard hitter or pinch hitter prefers a lighter bat for better maneuverability of the ball. A cricket ball cannot be heavier than five and a half ounces or larger in circumference than 8.81 inches. Due to their easy visibility at night, white balls are used in one-day matches. There are three main ways a batsman can score runs. First, if a batsman decides to run after hitting the ball, the two batsmen know that both of them have to run as far as the opposite wicket (i.e., their respective popping creases). They can continue this process as often as each of them feels that they can reach the popping crease safely. Second, a batsman receives six runs every time they hit a fly ball over the boundary line; and third, four runs every time they hit a ground ball over the boundary. In order to score runs, a batsman has to understand the condition of the pitch as well as the strategies used by bowlers. Therefore, stochastic game theoretic modeling is necessary.
Before going into the probabilistic foundation in the cricket environment, let us introduce it first. The term probabilistic foundation typically refers to the underlying probabilistic methods and principles that form the basis for a particular field or theory. In the context of complexity science, which studies complex systems and emergent behavior in systems composed of many interacting components, probabilistic methods are indeed an important part of the theoretical foundation. Complex systems often exhibit (such as in our case) stochastic behavior due to the large number of interacting components, environmental influences, or inherent randomness. Probabilistic models are used to describe these systems, where the exact behavior may not be predictable, but statistical properties can be understood. Complexity science often deals with how large-scale patterns emerge from simple rules applied locally by individual components of a system. These emergent phenomena can be studied using probabilistic methods, as the outcomes are often non-deterministic. Many complex systems are modeled using dynamical systems that include stochastic elements. For example, Markov processes, stochastic differential equations, and random walks are probabilistic tools that are essential in understanding the behavior of complex systems over time. Complexity science has roots in statistical mechanics, which is inherently probabilistic. It studies how macroscopic properties of systems arise from the microscopic interactions of components, often using probabilistic techniques. Given the intrinsic unpredictability and the emergent nature of phenomena in complex systems, a probabilistic foundation is not just expected but is often essential in complexity science. It allows researchers to develop models that can predict the distribution of possible outcomes rather than precise outcomes, to understand how local interactions lead to global phenomena, and to explore how small changes at the micro-level can lead to significant changes at the macro-level, often in a probabilistic manner. Since, in a one-day cricket match uncertainty occurs due to players’ strategies and the surrounding environment, we employ a probabilistic foundation in this paper.
There are 11 players in each team; let Z be the I × M run matrix of all the players in each team. Player i’s score is given by
$u_i(\mathbf{Z}) = \sum_{m=1}^{M} \exp(-\rho_i m)\, Z_{im}(u,w),$
where player i's discount factor is $\rho_i \in (0,1]$; the run (i.e., state variable) $Z_{im}(u,w) \geq 0$ is player i's runs before match $M+1$ (the current match), and is a function of the $u$th over and total wickets lost $w \in [0,9]$; $M$ is the total number of matches played before match $M+1$. From the previous section we assume $M = 10$. In Equation (1), $u_i(\mathbf{Z})$ is the score of player i, which is a monotonically increasing function of the run $\mathbf{Z}$. The right-hand side of Equation (1) is the discounted sum of runs of player i in the last $M$ matches, and $u$ and $w$ are the remaining overs and wickets of the last 10 matches, respectively. To construct the discounting factor we use the fact that humans are myopic by nature when making decisions. If player i has scored more runs in the last 5 matches, the score $u_i(\mathbf{Z})$ is higher compared to another player who scored more runs in the earlier 5 matches. On the other hand, if a star player fails to perform well in the last 10 matches, the player should be dropped. Another important assumption of this model is that the discount factor $\rho_i$ is constant only for player i. That is, for $i \neq j$ we assume that $\rho_i \neq \rho_j$. The expected runs of team T before match $M+1$ starts is
$\mathbf{Z}_T = \mathbb{E}_M\!\left[\sum_{i=1}^{I}\sum_{m=1}^{M} \beta_i\, W_i(u)\, \exp(-\rho_i m)\, Z_{im}(u,w)\right],$
where the control variable $W_i(u) \in \mathcal{W}$ takes its value from $\mathbb{R}^I$, which is a measure of the strategy to play attacking or defensive cricket based on the valuation of the $i$th player in terms of reputation (such as a higher batting average and strike rate for a batsman, and a lower bowling average for a bowler); $I = 11$; $\beta_i$ is the coefficient of $W_i(u)$; and $\mathbb{E}_M = \mathbb{E}\{\,\cdot\,|\,\mathbf{Z}_M\}$ is the overall conditional expectation on $\mathbf{Z}$ until match $M$. If player i has a batting average above 50 and a strike rate above 85, then the measure $W_i(u)$ takes on a very large finite value, since player i is going to be offensive and is expected to score more. If team T loses a couple of early wickets with a very low score, it sends in a batsman who can stay on the wicket longer with a low strike rate (i.e., $W_i(u) \rightarrow 0$) rather than a hard hitter or a pinch hitter. Finally, we assume $\mathbf{Z}_T \geq 0$ is a $C^{1,2}$ function with respect to $Z_{im}$ and $W_i$. In this paper, our objective is to maximize Equation (2).
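To make Equations (1) and (2) concrete, the following minimal Python sketch evaluates the discounted score $u_i(\mathbf{Z})$ and the weighted team total for a toy run matrix. The run matrix, discount factors $\rho_i$, coefficients $\beta_i$, and controls $W_i(u)$ below are hypothetical placeholders, not values estimated in this paper, and $m = 1$ is taken to index the most recent match.

import numpy as np

I, M = 11, 10                      # players and past matches
rng = np.random.default_rng(0)
Z = rng.integers(0, 120, size=(I, M)).astype(float)  # hypothetical past runs Z_im
rho = np.full(I, 0.05)             # hypothetical discount factors rho_i in (0, 1]
m = np.arange(1, M + 1)            # m = 1 treated as the most recent match (assumption)

# Equation (1): discounted score of each player
u_score = (np.exp(-rho[:, None] * m[None, :]) * Z).sum(axis=1)

# Equation (2): weighted team total with hypothetical controls W_i and coefficients beta_i
W = rng.uniform(0.5, 2.0, size=I)
beta = np.ones(I)
Z_T = float((beta * W * u_score).sum())
print(u_score.round(1), round(Z_T, 1))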
Assume the run dynamics of a match follow a stochastic differential equation (SDE):
$d\mathbf{Z}(u,w) = \hat{\boldsymbol{\mu}}[u, \mathbf{W}(u), \mathbf{Z}(u,w)]\, du + \boldsymbol{\sigma}[u, \sigma_2^*, \mathbf{W}(u), \mathbf{Z}(u,w)]\, d\mathbf{B}(u),$
where $\mathbf{W}_{I\times 1}(u) = [W_1(u)\; W_2(u)\;\cdots\; W_I(u)]^T \in \mathcal{W} \subseteq \mathbb{R}^I$ is the player control space; $\mathbf{Z} \in \mathcal{Z} \subseteq \mathbb{R}^{I\times M}$ is the run space under the one-day cricket rules; $\mathbf{B}_{M\times M}(u)$ is an $M\times M$-dimensional matrix of Brownian motions defined on a probability space $(\Omega, \mathcal{F}, \mathbb{P})$, where $\Omega$ is the sample space, $\mathcal{F}$ is the $\sigma$-algebra generated by the $\mathcal{F}_u^{\mathbf{Z}}$-Brownian motion, and $\mathbb{P}$ is the probability measure such that $\mathcal{F}_u^{\mathbf{Z}}$ is the natural filtration of $\mathbf{B}_{M\times M}(u)$; $\hat{\boldsymbol{\mu}}_{I\times M} > 0$ is the drift coefficient; and the positive semi-definite matrix $\boldsymbol{\sigma}_{I\times M} \geq 0$ is the diffusion coefficient so that
$\boldsymbol{\sigma}[u, \sigma_2^*, \mathbf{W}(u), \mathbf{Z}(u,w)] = \boldsymbol{\sigma}_1[u, \mathbf{W}(u), \mathbf{Z}(u,w)] + \sigma_2^*.$
It is important to note that $\boldsymbol{\sigma}_1$ in Equation (4) comes from the weather conditions of the venue of the game, the percentage attendance of the home crowd, and the type of match (i.e., day or day–night match); and $\sigma_2$ comes from the behavior of the bowler of the opposition team, coming from the fractal dimensional strategy space, and is measured by the square root of the product of the complex characteristic function $\Phi_{\hat{g}_U}(\theta)$ with its conjugate $\bar{\Phi}_{\hat{g}_U}(\theta)$, which is discussed in Lemma 3. We assume the SDE expressed in Equation (3) has linear drift with respect to the runs $\mathbf{Z}$ and the control $\mathbf{W}$.
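As a rough illustration of how the run dynamics in Equations (3) and (4) can be simulated, the sketch below applies an Euler–Maruyama discretization to a scalar analogue of the SDE. The drift and diffusion functions are simple hypothetical stand-ins for $\hat{\mu}$ and $\sigma_1 + \sigma_2^*$, not the specifications derived later in the paper.

import numpy as np

def simulate_runs(U=50.0, n_steps=300, z0=0.0, sigma2_star=0.4, seed=1):
    """Euler-Maruyama for a scalar analogue of dZ = mu_hat du + (sigma_1 + sigma_2*) dB."""
    rng = np.random.default_rng(seed)
    du = U / n_steps
    z = np.empty(n_steps + 1)
    z[0] = z0
    for k in range(n_steps):
        u = k * du
        W_u = 1.0 + 0.5 * np.sin(0.1 * u)        # hypothetical control path W(u)
        mu_hat = 6.0 * W_u                       # hypothetical drift (runs per over)
        sigma1 = 1.0 + 0.01 * z[k]               # hypothetical sigma_1(u, W, Z)
        dB = np.sqrt(du) * rng.standard_normal()
        z[k + 1] = z[k] + mu_hat * du + (sigma1 + sigma2_star) * dB
    return z

print(round(simulate_runs()[-1], 1))             # simulated innings total after 50 overs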
Assumption 1.
For $U > 0$, let $\hat{\mu}(u, W, Z): [0,U]\times\mathbb{R}^I\times\mathbb{R}^{I\times M}\to\mathbb{R}^{I\times M}$ and $\sigma(u, \sigma_2, W, Z): [0,U]\times S^{I\times U}\times\mathbb{R}^I\times\mathbb{R}^{I\times M}\to\mathbb{R}^{I\times M}$ be some measurable functions with the $I\times U$-dimensional two-sphere $S^{I\times U}$ and, for some positive constant $K_1$, $W\in\mathbb{R}^I$, and $Z\in\mathbb{R}^{I\times M}$, we have linear growth as
$|\hat{\mu}(u, W, Z)| + |\sigma(u, \sigma_2, W, Z)| \leq K_1(1 + |Z|),$
such that there exists another positive, finite constant $K_2$ for a different score matrix $\tilde{Z}_{I\times M}$ such that the Lipschitz condition
$|\hat{\mu}(u, W, Z) - \hat{\mu}(u, W, \tilde{Z})| + |\sigma(u, \sigma_2, W, Z) - \sigma(u, \sigma_2, W, \tilde{Z})| \leq K_2\,|Z - \tilde{Z}|, \quad \forall\, \tilde{Z}\in\mathbb{R}^{I\times M},$
is satisfied and
$|\hat{\mu}(u, W, Z)|^2 + \|\sigma(u, \sigma_2, W, Z)\|^2 \leq K_2^2\,(1 + |\tilde{Z}|^2),$
where $\|\sigma(u, \sigma_2, W, Z)\|^2 = \sum_{i=1}^{I}\sum_{j=1}^{I}|\sigma_{ij}(u,\sigma_2,W,Z)|^2.$
Remark 1.
The local Lipschitz condition and Assumption 1 guarantee that the SDE expressed in Equation (3) has a unique solution.
Assumption 2.
$(\Omega, \mathcal{F}, \mathcal{F}_u^{\mathbf{Z}}, \mathbb{P})$ represents the underlying probability space, where $\Omega$ is the sample space, $\mathcal{F}$ is the $\sigma$-algebra, $\mathcal{F}_u^{\mathbf{Z}}$ denotes the filtration at the $u$th over generated by an $M\times M$-dimensional $\mathcal{F}_u$-Brownian motion $\mathbf{B}$ of the run $\mathbf{Z}$ with $\mathcal{F}_u^{\mathbf{Z}}\subseteq\mathcal{F}_u$, and $\mathbb{P}$ is the probability measure. The control measure for the valuation of players, $\mathbf{W}$, is an $\mathcal{F}_u^{\mathbf{Z}}$-adapted process that satisfies Assumption 1.
Remark 2.
Assumption 2 outlines the assumptions regarding the underlying probability space and ensures the existence of a unique fixed point. Together, Assumptions 1 and 2 ensure a unique solution to the SDE (3) by contraction mapping.

2.2. Derivation of the SDE

In this section, we conduct important derivations corresponding to the SDE expressed by Equation (3). First, by Definition 1 we define an infinitesimal generator, and then, by Definition 2 we define a characteristic-like quantum generator. Due to the presence of uncertainties regarding the environment, bowlers’ strategies, and pitch conditions, the run dynamics become extremely volatile. To obtain a stabilized solution we need Definition 2. Lemmas 1 and 2 give the appropriate structure of the characteristic-like quantum generator. This section ends with uncertainties related to the diffusion component σ ( u , σ 2 , W , Z ) .
Since, at the beginning of each innings, each team starts with a zero score, the initial condition for match $M$ is $\mathbf{Z}_{I\times 1} = \mathbf{0}_{I\times 1}$. Furthermore, we assume $\mathbf{W}$ is a Markov control. Therefore, there exists a measurable function $h: [0,U]\times C([0,U]: \mathbb{R}^{I\times M})\to\mathbb{R}^{I\times M}$ such that $\mathbf{W}(u) = h[\mathbf{Z}(u,w)]$. That is, in order to know $\mathbf{W}$ we need to know $\mathbf{Z}$ first, and it cannot be exogenously specified.
Definition 1.
Assume Z ( u , w ) represents a non-homogeneous Fellerian semigroup on overs in R I × M . The infinitesimal generator A of Z ( u , w ) is given by
$\mathcal{A}h(z) = \lim_{u\downarrow 0}\frac{\mathbb{E}_u[h(\mathbf{Z}(u,w))] - h(z(w))}{u}, \qquad z\in\mathbb{R}^{I\times M},$
where $h:\mathbb{R}^{I\times M}\to\mathbb{R}^{I\times M}$ is a $C^{1,2}$ function, $\mathbf{Z}$ has a compact support, and the limit exists at $z(w)$, where $\mathbb{E}_u = \mathbb{E}[\,\cdot\,|\,\mathcal{F}_u^{\mathbf{Z}}]$ represents team T's conditional expectation of the run $\mathbf{Z}$ at over $u$. Furthermore, if the above Feller semigroup is homogeneous on overs, then $\mathcal{A}h$ is exactly equal to the Laplace operator.
Remark 3.
Definition 1 characterizes the behavior of the stochastic process on an infinitesimal over-interval $[u,\tau]\subset[0,U]$ for all $\tau = u + \varepsilon$ such that $\varepsilon = U/n$, where the over-interval $[0,U]$ has been divided into $n$ equal sub-over-intervals. This is used later in this paper to construct a Wick-rotated Schrödinger-type equation.
Remark 4.
For the SDE expressed in Equation (3) with a Fellerian semigroup, finding an explicit solution generally depends on the specific form of the SDE and the associated Fellerian semigroup. The easiest version is the linear SDE, where in Equation (3) $\hat{\mu}$ and $\sigma$ are fixed. With a Fellerian semigroup, the explicit solution is
$\mathbf{Z}(u,w) = \mathbf{Z} + u\,\hat{\boldsymbol{\mu}} + \boldsymbol{\sigma}\,\mathbf{B}(u).$
For an Ornstein–Uhlenbeck process represented by the equation
$d\mathbf{Z}(u,w) = \theta\,\big[\hat{\boldsymbol{\mu}} - \mathbf{Z}(u,w)\big]\,du + \boldsymbol{\sigma}\,d\mathbf{B}(u),$
it has the explicit solution
$\mathbf{Z}(u,w) = \mathbf{Z}\exp\{-\theta u\} + \hat{\boldsymbol{\mu}}\,[1 - \exp\{-\theta u\}] + \boldsymbol{\sigma}\int_0^U \exp\{-\theta(U-u)\}\,d\mathbf{B}(u),$
where $\theta$ is some constant. The Fellerian semigroup associated with $\mathbf{Z}(u,w)$ has an explicit expression due to the Gaussian nature of the transition probabilities $\int_{\mathbb{R}} f(z)\,p(u,Z,z)\,dz$, where $z$ represents the potential run that the process $\mathbf{Z}(u,w)$ could be at over $u$, $f$ is a bounded continuous function, and $p(u,Z,z)$ is the Gaussian density, expressed as
$\frac{1}{\big[2\pi\sigma^2\left(1 - \exp\{-2\theta u\}\right)\big]^{1/2}}\exp\left(-\frac{\big[z - Z\exp\{-\theta u\} - \hat{\mu}\left(1 - \exp\{-\theta u\}\right)\big]^2}{2\sigma^2\left(1 - \exp\{-2\theta u\}\right)}\right).$
In general, for many SDEs, finding an explicit expression for the Fellerian semigroup is challenging. The explicit form is often not available unless the SDE is simple or has special properties (e.g., linear coefficients, certain symmetries). For more complex or non-linear SDEs, explicit solutions and semigroup expressions typically require numerical methods or approximations. Therefore, unless a definite structure of Equation (3) is defined, it is very hard to find an explicit solution.
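The Ornstein–Uhlenbeck case in Remark 4 can be checked numerically. The sketch below samples $\mathbf{Z}(u,w)$ exactly from its Gaussian transition law (scalar case) and compares the empirical mean and variance with the theoretical ones; $\theta$, $\hat{\mu}$, $\sigma$, and the horizon are arbitrary illustrative values, and the variance uses the Itô-isometry form $\sigma^2(1-e^{-2\theta u})/(2\theta)$, which coincides with the expression in the density above when $\theta = 1/2$.

import numpy as np

theta, mu_hat, sigma, z0, u = 0.5, 60.0, 5.0, 0.0, 10.0   # illustrative values; theta = 1/2
rng = np.random.default_rng(2)

mean = z0 * np.exp(-theta * u) + mu_hat * (1.0 - np.exp(-theta * u))
var = sigma**2 * (1.0 - np.exp(-2.0 * theta * u)) / (2.0 * theta)  # Ito isometry for the OU integral

samples = mean + np.sqrt(var) * rng.standard_normal(100_000)       # exact draws from the transition law
print(round(mean, 3), round(var, 3), round(samples.mean(), 3), round(samples.var(), 3))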
Given that h is a measurable function dependent on u, it can potentially exhibit extremely large values and instability. To stabilize W , we must apply the natural logarithmic transformation and establish a characteristic similar to a quantum operator, as outlined in Definition 2.
Definition 2.
Consider a Fellerian semigroup of $\mathbf{Z}(u,w)$ over a small interval of one-day overs $[u, u+\varepsilon]$ such that $\varepsilon\downarrow 0$. Define a characteristic-like quantum operator for the process initiated at $u$ as follows:
$\mathcal{A}h(z) = \lim_{\varepsilon\downarrow 0}\frac{\log\mathbb{E}_u[\varepsilon^2 h(\mathbf{Z}(u,w))] - \log[\varepsilon^2 h(z(w))]}{\log\mathbb{E}_u(\varepsilon^2)}, \qquad \text{for every } z\in\mathbb{R}^{I\times M},$
where $h$ is the same as in Definition 1 and $\mathbb{E}_u$ denotes the conditional expectation of the run $\mathbf{Z}$ at the $u$th over. For $\varepsilon > 0$ and a given fixed $h$, we consider the set of all open balls of the form $B_\varepsilon(h)$ within $\mathcal{B}$ (the collection of all open balls). As $\varepsilon\downarrow 0$, it follows that $\log\mathbb{E}_u(\varepsilon^2)\to-\infty$.
Remark 5.
In the literature on stochastic processes, Dynkin’s formula provides the expected value of any suitably smooth statistic of an Itô process at a stopping time. Hence, we discuss Dynkin’s formula in the context of run dynamics. Lemma 1 offers an expression for a characteristic-like quantum generator for a C 1 , 2 function h corresponding to the SDE (3).
Lemma 1.
(Dynkin formula) Suppose a Feller process $\mathbf{Z}(u,w)$ follows Assumptions 1 and 2, Definition 2, and Equation (3). For $h\in C^{1,2}$, with the over-interval $[\nu,\tilde{\nu}]$ and $\mathbb{E}_\nu[\tilde{\nu}] < \infty$, we have
$\log\mathbb{E}_{\tilde{\nu}}\big[\tilde{\nu}^2 h(\mathbf{Z}_{\tilde{\nu}})\big] = \log[\nu^2 h(\mathbf{Z}_\nu)] + \log\left\{1 + \frac{1}{\nu^2 h(\mathbf{Z}_\nu)}\,\mathbb{E}_\nu\left[\int_\nu^{\tilde{\nu}} u^2\left(\sum_{i=1}^{I}\hat{\mu}_i\frac{\partial h(\mathbf{Z})}{\partial Z_i} + \frac{1}{2}\sum_{i=1}^{I}\sum_{j=1}^{I}(\boldsymbol{\sigma}\boldsymbol{\sigma}^T)_{ij}\frac{\partial^2 h(\mathbf{Z})}{\partial Z_i\,\partial Z_j}\right)du\right]\right\},$
where $\mathbb{E}_\nu$ is the conditional expectation of the run at the beginning of the over-interval, $\nu^2 h(\mathbf{Z}_\nu)\neq 0$, and with respect to the probability law $\mathcal{R}^\nu$ for $\mathbf{Z}(u,w)$ starting at $\mathbf{Z}_\nu$ we have that
$\mathcal{R}^\nu[\mathbf{Z}_{\nu_1}\in F_1,\ldots,\mathbf{Z}_{\nu_m}\in F_m] = \mathbb{P}^0[\mathbf{Z}_{\nu_1+\nu}\in F_1,\ldots,\mathbf{Z}_{\nu_m+\nu}\in F_m],$
where the $F_i$'s are Borel sets.
Proof. 
See Appendix A. □
Finally, by [24],
$\mathcal{A}h(z) = \sum_{i=1}^{I}\hat{\mu}_i\frac{\partial}{\partial Z_i}h(\mathbf{Z}) + \frac{1}{2}\sum_{i=1}^{I}\sum_{j=1}^{I}(\boldsymbol{\sigma}\boldsymbol{\sigma}^T)_{ij}\frac{\partial^2 h(\mathbf{Z})}{\partial Z_i\,\partial Z_j}.$
Equation (5) yields
$\log\mathbb{E}_{\tilde{\nu}}\big[\tilde{\nu}^2 h(\mathbf{Z}_{\tilde{\nu}})\big] = \log\big[\nu^2 h(\mathbf{Z}_\nu)\big] + \log\left\{1 + \frac{1}{\nu^2 h(\mathbf{Z}_\nu)}\,\mathbb{E}_\nu\left[\int_\nu^{\tilde{\nu}} u^2\,\mathcal{A}h(z)\,du\right]\right\}.$
If $\mathcal{A}h(z) = 0$, then Equation (3) is a run trap.
Lemma 2.
Let $h$ be a $C^{1,2}$ function. Then, for a small over-interval $[\nu,\tilde{\nu}]$ with $\varepsilon\downarrow 0$ and $h(\mathbf{Z}_\nu)\neq 0$,
$\mathcal{A}h(z) = \log\left\{1 + \frac{1}{h(\mathbf{Z}_\nu)}\left[\sum_{i=1}^{I}\hat{\mu}_i\frac{\partial h(\mathbf{Z})}{\partial Z_i} + \frac{1}{2}\sum_{i=1}^{I}\sum_{j=1}^{I}(\boldsymbol{\sigma}\boldsymbol{\sigma}^T)_{ij}\frac{\partial^2 h(\mathbf{Z})}{\partial Z_i\,\partial Z_j}\right]\right\}.$
Proof. 
See Appendix A. □
In Equation (3), the stochastic drift component $\hat{\boldsymbol{\mu}}$ is replaced by a deterministic function $\boldsymbol{\mu}$ such that
$\mu[u,\mathbf{W}(u),\mathbf{Z}(u,w)] = \mathcal{A}h(z) + \tilde{\mu}[u,\mathbf{W}(u),\mathbf{Z}(u,w),f(Z_u)],$
where $\mathcal{A}h$ is a characteristic-like quantum operator of a Feller semigroup, $f(Z_u)$ is the probability distribution of $\mathbf{Z}(u,w)$, and for a given over $u$, $k_1 > 0$, $k_2 > 0$, and $l > 0$, we have $\tilde{\mu}: [0,U]\times\mathbb{R}^I\times\mathbb{R}^{I\times M}\times(N_\gamma^2(\mathbb{R}^{I\times M}),\rho)\to\mathbb{R}^{I\times M}$ that satisfies the following conditions:
$\|\tilde{\mu}(u,W,Z,f) - \tilde{\mu}(u,\hat{W},\hat{Z},\hat{f})\| \leq k_1\,\|W - \hat{W}\| + k_2\,\|Z - \hat{Z}\| + \rho(f,\hat{f}),$
and
$\|\tilde{\mu}(u,W,Z,f)\| \leq l\,(1 + \|W\| + \|Z\| + \|f\|_\gamma),$
where $\gamma(z) = 1 + \|z\|$ and $(N_\gamma^2(\mathbb{R}^{I\times M}),\rho)$ forms a complete metric space with the metric $\rho(f,\hat{f})$ [25].
Remark 6.
The conditions expressed in Equations (8) and (9) can be used for generalized non-linear SDEs. Sometimes the metric $\rho(f,\hat{f})$ is called the Wasserstein measure.
Let the diffusion component $\sigma(u,\sigma_2,W,Z)$ be additively separable into $\sigma_1(u,W,Z)$ and $\sigma_2$, where $\sigma_2 = [\Phi_{\hat{g}_U}(\theta)\,\bar{\Phi}_{\hat{g}_U}(\theta)]^{1/2}$, with $\Phi_{\hat{g}_U}(\theta)$ being the characteristic function defined in Lemma 3 below. $\sigma_1(u,W,Z)$ consists of the venue of the game, the percentage of attendance of the home crowd, the type of one-day match, the amount of dew on the pitch, and the wind speed.
First, if team T is playing abroad, players have a harder time scoring runs than at home, and thinking of this they create extra mental strain on themselves. We assume that pressure $p$ is a non-negative $C^{1,2}$ function $p(u,Z): [0,U]\times\mathbb{R}^{I\times M}\to\mathbb{R}_+^{I\times M}$ at match $M+1$ such that if $Z_{u-1} < \mathbb{E}_{u-1}(Z)$, then $p$ takes a very high positive value. Second, we define the attendance rate as a positive finite $C^{1,2}$ function $\mathcal{A}(u,W): [0,U]\times\mathbb{R}^I\to\mathbb{R}_+^I$, with $\partial\mathcal{A}/\partial W > 0$ and $\partial\mathcal{A}/\partial u \gtrless 0$ depending on whether, at over $u$, player $i$ with valuation $W_i\in\mathbf{W}$ is still playing or is out. Thirdly, assume the effect of a day or day–night one-day match is a function $\mathcal{B}(Z)\in\mathbb{R}_+^{I\times M}$ such that
$\mathcal{B}(Z) = \frac{1}{2}\left[\frac{1}{2}\mathbb{E}_0(Z_{D_2}) + \frac{1}{2}\mathbb{E}_0(Z_{DN_1})\right] + \frac{1}{2}\left[\frac{1}{2}\mathbb{E}_0(Z_{D_1}) + \frac{1}{2}\mathbb{E}_0(Z_{DN_2})\right],$
where for $i = 1,2$, $\mathbb{E}_0(Z_{D_i})$ is the conditional expectation of the runs of team T before the start of the day match $M+1$ with runs at the $i$th innings $Z_{D_i}$, and $\mathbb{E}_0(Z_{DN_i})$ is the conditional expectation of the runs before starting a day–night match $M+1$. Furthermore, if team T wins the toss, then it will go for the payoff $\frac{1}{2}\left[\frac{1}{2}\mathbb{E}_0(Z_{D_2}) + \frac{1}{2}\mathbb{E}_0(Z_{DN_1})\right]$, and the latter part of Equation (10) otherwise. Finally, as the amount of dew on the grass and the wind speed at over $u$ are ergodic, following [26] we assume this can be represented by a Weierstrass function $Z_e: [0,U]\to\mathbb{R}$, defined as
$Z_e(u) = \sum_{\alpha=1}^{\infty}(\lambda_1+\lambda_2)^{(s-2)\alpha}\sin\big[(\lambda_1+\lambda_2)^{\alpha}u\big],$
where $s\in(1,2)$ is a penalization constant of weather at over $u$, $\lambda_1$ is the dew point measure, and $\lambda_2$ is the wind speed such that $(\lambda_1+\lambda_2) > 1$.
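A truncated version of the Weierstrass-type weather term $Z_e(u)$ in Equation (11) can be evaluated directly, since its terms decay geometrically when $s\in(1,2)$ and $\lambda_1+\lambda_2 > 1$. The dew-point and wind-speed values below are hypothetical, and the truncation length is an arbitrary numerical choice.

import numpy as np

def Z_e(u, lam1=1.2, lam2=0.9, s=1.5, n_terms=60):
    """Truncated series sum_alpha (lam1+lam2)^((s-2)*alpha) * sin((lam1+lam2)^alpha * u)."""
    lam = lam1 + lam2                       # requires lam1 + lam2 > 1 and s in (1, 2)
    alpha = np.arange(1, n_terms + 1)
    return float(np.sum(lam ** ((s - 2.0) * alpha) * np.sin(lam ** alpha * u)))

print([round(Z_e(u), 4) for u in np.linspace(0.0, 50.0, 6)])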
Assumption 3.
$\sigma_1(u,W,Z)$ is a positive, finite part of the diffusion component in Equation (3) which satisfies Assumptions 1 and 2, and is defined as
$\sigma_1(u,W,Z) = p(u,Z) + \mathcal{A}(u,W) + \mathcal{B}(Z) + Z_e(u) + \rho_1\,p^T(u,Z)\,\mathcal{A}(u,W) + \rho_2\,\mathcal{A}^T(u,W)\,\mathcal{B}(Z) + \rho_3\,\mathcal{B}^T(Z)\,p(u,Z),$
where $\rho_j\in(-1,1)$ is the $j$th correlation coefficient for $j = 1,2,3$, and $\mathcal{A}^T$, $\mathcal{B}^T$, and $p^T$ are the transposes of $\mathcal{A}$, $\mathcal{B}$, and $p$, which satisfy all conditions associated with Equations (10) and (11). As the ergodic function $Z_e$ comes from nature, team T does not have any control over it, and its correlation coefficient with the other terms in Equation (12) is assumed to be zero.
The randomness in $\sigma_2$ of Equation (3) comes from the deliveries of the opposition team's bowler. There are two main types of bowler: pace bowlers and spinners. Pace bowlers have two components: the speed of the ball $s\in\mathbb{R}_+^{I\times U}$ in miles per hour; and the curvature of the bowling path, measured by the dispersion from the straight line connecting the two middle stumps, measured as $x\in\mathbb{R}_+^{I\times U}$ inches. Assume a payoff function $A_1(s,x,G): \mathbb{R}_+^{I\times U}\times\mathbb{R}_+^{I\times U}\times[0,1]\to\mathbb{R}^{2I\times U}$ such that, at over $u$, the expected payoff after correctly guessing a ball is $\mathbb{E}_u[A_1(s,x,G)]$. Here, $G$ is a guess function, where $G = 1$ if the batsman correctly predicts the bowler's delivery, $G = 0$ if the prediction is incorrect, and $G$ falls within $(0,1)$ for a partial guess.
Conversely, there exists a payoff function $A_2$ for a leg spinner, defined as $A_2(s,x,\theta_1,G): \mathbb{R}_+^{I\times U}\times\mathbb{R}_+^{I\times U}\times(\pi/2,\pi]\times[0,1]\to\mathbb{R}^{2I\times U}$, where $\theta_1$ is the angle between the line from the bowler's hand to the ball's first drop on the crease and the line connecting the second point to the bat. The expected payoff for a run at over $u$ when the bowler is a leg spinner is $\mathbb{E}_u[A_2(s,x,\theta_1,G)]$. Lastly, the payoff function for an off-spinner is $A_3(s,x,\theta_1,\theta_2,G): \mathbb{R}_+^{I\times U}\times\mathbb{R}_+^{I\times U}\times(\pi/2,\pi]\times(0,\pi/36]\times[0,1]\to\mathbb{R}^{2I\times U}$, where $\theta_2$ is the allowable elbow extension during a delivery, with the expectation at over $u$ as $\mathbb{E}_u[A_3(s,x,\theta_1,\theta_2,G)]$. If $\theta_2$ is more than $\pi/36$, the off-spinner achieves extra spin to make a teesra. As a batsman does not know who is coming to bowl after a 6-ball over is completed, their total expected payoff function at over $u$ is $A(s,x,\theta_1,\theta_2,G) = \ell_1\,\mathbb{E}_u[A_1(s,x,G)] + \ell_2\,\mathbb{E}_u[A_2(s,x,\theta_1,G)] + \ell_3\,\mathbb{E}_u[A_3(s,x,\theta_1,\theta_2,G)]$, where for $j\in\{1,2,3\}$, $\ell_j$ is the probability of a pace bowler, a leg spinner, and an off-spinner, respectively, with $\ell_1+\ell_2+\ell_3 = 1$. Therefore, $A(s,x,\theta_1,\theta_2,G): \mathbb{R}_+^{I\times U}\times\mathbb{R}_+^{I\times U}\times(\pi/2,\pi]\times(0,\pi/36]\times[0,1]\to\mathbb{R}^{2I\times U}$.
Define the strategy space as a conformal field such that the plane $\mathbb{C}^{I\times U} = \mathbb{R}^{2I\times U}$ is a subset of the $I\times U$-dimensional two-sphere $S^{I\times U} = \mathbb{C}^{I\times U}\cup\{\infty\}$, a one-point compactification of the plane $\mathbb{C}^{I\times U}$ [27]. Let $\mathbb{U} := \{A\in\mathbb{C}^{I\times U}: |A| < 1\}$ be a unit sphere and, for a compact simple path $\gamma\subset\bar{\mathbb{U}}^{I\times U}\setminus\{0\}^{I\times U}$ with its end point at $\gamma_U$, there exists a unique conformal homeomorphism $h_\gamma: \mathbb{U}^{I\times U}\to\mathbb{U}^{I\times U}\setminus\gamma$ such that $h_\gamma(0) = 0$ and $h_\gamma'(0)\in\mathbb{R}_+^{I\times U}$, where $h_\gamma' = \partial h/\partial\gamma$ [27]. For another compact simple path $\hat{\gamma}\subset\bar{\mathbb{U}}^{I\times U}$ assume the two end points are at $0$ and $\hat{\gamma}_U$. Suppose $\hat{\gamma}_r$ is the arc of the path joining $0$ and $\hat{\gamma}_U$ for each point $r\in\hat{\gamma}\setminus\{0\}^{I\times U}$. If $\tilde{h}(r) := \log h_{\hat{\gamma}_r}'(0)$, then $\tilde{h}$ is a homeomorphism from $\hat{\gamma}\setminus\{0\}^{I\times U}$ onto $[0,U)$. Suppose $r(u)$ is the inverse map $r: [0,U)\to\hat{\gamma}$, and set $h(A,u) = h_u(A) := h_{\hat{\gamma}_{r(u)}}(A)$, and for another multivariate function in the unit sphere $g_u(A)$ with $g_u(0) = 0^{I\times U}$ such that $\phi_u(A) = g_u^{-1}(h_u(A))$, where $\phi_u(A)$ is a multivariate mapping of the unit sphere $\mathbb{U}$. Loewner's slit mapping theorem and the Schramm–Loewner-type evolution equation [27] give
$\frac{\partial}{\partial u}g_u(A) = g_u(A)\,\frac{\sqrt{\kappa\,\Upsilon_{\eta(A)}}\,W_u + g_u(A)}{\sqrt{\kappa\,\Upsilon_{\eta(A)}}\,W_u - g_u(A)},$
where $\kappa\in[0,\infty)$ is a positive diffusivity parameter, $W_u$ is a Brownian motion on the two-sphere $S^{I\times U}$, and $\Upsilon_{\eta(A)}\in(0,\infty)$ is the area of a von Koch snowflake curve with total number of iterations $\eta(A)$ [28].
To construct the $\Upsilon_{\eta(A)}$ function, suppose that the total number of iterations is defined as $\eta(A): \mathbb{R}^{I\times U}\to\mathbb{R}_+^{I\times U}\cup\{0\}^{I\times U}$ such that $\partial\eta/\partial A > 0$ and $\partial^2\eta/\partial A^2 > 0$. The main reason behind this assumption is that if the payoff $A$ is high, then a batsman can hit a 6 from a delivery, and then he needs to perform more iterations of the von Koch snowflake strategy space of a bowler. Let us denote $\hat{\theta}_\eta$ as the number of sides, $\tilde{\theta}_\eta$ as the length of each side, and $\Xi_\eta$ as the perimeter of the strategy space of the bowler before the $u$th over. Then, $\hat{\theta}_\eta = 3\cdot 4^{\eta(A)}$, $\tilde{\theta}_\eta = (1/3)^{\eta(A)}$, and $\Xi_\eta = \hat{\theta}_\eta\,\tilde{\theta}_\eta = 3\,(4/3)^{\eta(A)}$, such that $A = 0$ implies $\eta(A) = 0$, and then $\hat{\theta}_\eta = 3$, which means the strategy space becomes a triangle, which we denote by $\Delta$. Therefore, by [28], the area of the von Koch snowflake is $\Upsilon_{\eta(A)} = \Upsilon_{\eta(A)-1} + (1/3)(4/9)^{\eta(A)}\Delta$, and finally, for $\Upsilon_{0(A)} = \Delta$, we have $\Upsilon_{\eta(A)} = \frac{\Delta}{5}\big[8 - 3\left(\frac{4}{9}\right)^{\eta(A)}\big]$, which results in the diffusion part of Equation (13).
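The closed-form area $\Upsilon_{\eta(A)} = \frac{\Delta}{5}\big[8 - 3(4/9)^{\eta(A)}\big]$ used above can be cross-checked against an area recursion for the snowflake. The sketch below does this for a unit starting triangle ($\Delta = 1$), using the standard recursion in which iteration $k$ adds $3\cdot 4^{k-1}$ triangles of area $\Delta/9^k$; both routes agree for every iteration count.

def koch_area_closed_form(eta, delta=1.0):
    """Closed form: Upsilon_eta = (delta / 5) * (8 - 3 * (4/9)**eta)."""
    return (delta / 5.0) * (8.0 - 3.0 * (4.0 / 9.0) ** eta)

def koch_area_recursive(eta, delta=1.0):
    """Standard recursion: iteration k adds 3 * 4**(k-1) triangles, each of area delta / 9**k."""
    area = delta
    for k in range(1, eta + 1):
        area += 3 * 4 ** (k - 1) * delta / 9 ** k
    return area

for eta in range(6):
    print(eta, round(koch_area_closed_form(eta), 6), round(koch_area_recursive(eta), 6))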
Now, for a fixed $\Upsilon_\eta$ define a shifted conformal map $\hat{g}_u(A) \equiv \sqrt{\kappa\,\Upsilon_\eta}\,W_u/g_u(A)$ such that $\hat{g}_u^{-1}(\omega) \stackrel{d}{=} g_u^{-1}\big(\sqrt{\kappa\,\Upsilon_\eta}\,W_u/\omega\big)$, where $\stackrel{d}{=}$ indicates the equality of the distributions of the stochastic process. Therefore,
$\hat{g}_u(A) = \frac{\sqrt{\kappa\,\Upsilon_{\eta(A)}}\,W_u}{g_u(A)}.$
Equation (13) implies
$\partial\hat{g}_u(A) = \frac{1}{\hat{g}_u(A)}\Big[\sqrt{\kappa\,\Upsilon_{\eta(A)}}\,W_u + \hat{g}_u(A)\Big]^2\,\partial u + \sqrt{\kappa\,\Upsilon_{\eta(A)}}\,W_u\,\hat{g}_u(A)\,\partial W_u,$
as the two processes are generated from the same Brownian motion $W_u$, and where $\hat{g}_u(A)\neq 0$. As both the drift and diffusion components of Equation (14) contain the Brownian motion $W_u$, we define a function $\tilde{g}(W_u) = W_u$ and, assuming it is a Feller process on a two-sphere, we define an infinitesimal generator $\mathcal{L}_{\tilde{g}_u}$ on it such that
$\partial\hat{g}_u(A) = \frac{1}{\hat{g}_u(A)}\Big[\sqrt{\kappa\,\Upsilon_{\eta(A)}}\,\mathcal{L}_{\tilde{g}_u} + \hat{g}_u(A)\Big]^2\,\partial u + \sqrt{\kappa\,\Upsilon_{\eta(A)}}\,\mathcal{L}_{\tilde{g}_u}\,\hat{g}_u(A)\,\partial W_u.$
Lemma 3.
Let $k(\hat{g}) = \exp(\imath\theta\hat{g})$ be a $C^2(S^{I\times U})$ function and let $\hat{g}_u$ satisfy the stochastic differential equation specified in Equation (15). If the unique bounded function $\ell: [0,U]\times S^{I\times U}\to S^{I\times U}$ satisfies the partial differential equation
$\mathcal{L}\,\ell(u,\hat{g}) = \frac{\partial}{\partial u}\ell(u,\hat{g}) + \frac{1}{\hat{g}(A)}\Big[\sqrt{\kappa\,\Upsilon_{\eta(A)}}\,\mathcal{L}_{\tilde{g}} + \hat{g}(A)\Big]^2\frac{\partial}{\partial\hat{g}}\ell(u,\hat{g}) + \frac{1}{2}\Big[\sqrt{\kappa\,\Upsilon_{\eta(A)}}\,\mathcal{L}_{\tilde{g}_u}\,\hat{g}\Big]^2\frac{\partial^2}{\partial\hat{g}^2}\ell(u,\hat{g}) = 0,$
for all $u\in[0,U]$ and $\hat{g}\in S^{I\times U}$, with terminal condition $\ell(U,\hat{g}) = k(\hat{g})$ given by $\ell(u,\hat{g}) = \mathbb{E}[k(\hat{g}_U)\,|\,\hat{g}_u = \hat{g}]$, then the characteristic function is
$\Phi_{\hat{g}_U}(\theta) = \exp\Bigg\{\imath\,\theta\,\exp\Big[-\frac{1}{\hat{g}^2(A)}\big(\sqrt{\kappa\,\Upsilon_{\eta(A)}}\,\mathcal{L}_{\tilde{g}} + \hat{g}(A)\big)^2\,U\Big] + \frac{1}{4}\,\theta^2\,\frac{\big[\sqrt{\kappa\,\Upsilon_{\eta(A)}}\,\mathcal{L}_{\tilde{g}_u}\big]^2}{\big[\sqrt{\kappa\,\Upsilon_{\eta(A)}}\,\mathcal{L}_{\tilde{g}} + \hat{g}\big]^2}\Big[\exp\Big(-\frac{2}{\hat{g}^2(A)}\big(\sqrt{\kappa\,\Upsilon_{\eta(A)}}\,\mathcal{L}_{\tilde{g}} + \hat{g}(A)\big)^2\,U\Big) - 1\Big]\Bigg\},$
where $\imath$ is an imaginary number and $\theta\in\mathbb{R}$.
where ı is an imaginary number and θ R .
Proof. 
See Appendix A. □
Remark 7.
Lemma 3 determines the characteristic function of the run dynamics $\mathbf{Z}$ defined on the fractal surface. For this, we first develop the Schramm–Loewner-type evolution in Equation (14), generate an infinitesimal generator $\mathcal{L}_{\tilde{g}_u}$, and finally determine this generator for a unique bounded function $\ell$.
Since the characteristic function in Lemma 3 is in the complex plane, we need to multiply it by its conjugate and then take the square root to obtain $\sigma_2$, which is in $S^{I\times U}$.

3. Computation of the Control Coefficient

In this section, we use the Feynman-type path integral approach to determine the control coefficient β i . We consider two cases: match without interruption by rain and with interruption by rain. Equations (17) and (27) determine this value. The objective is to maximize Equation (16) subject to the run dynamics expressed by Equation (3). At the end of each subsection, an example is given to show applications of Propositions 1 and 3. The central part of our construction is to choose an appropriate g function so that the Lagrangian control problem has a unique solution. The advantage of this method over the HJB equation approach is that this avoids the computation of the very complicated value function. Even the Feynman–Kac approach faces the same problem. Furthermore, the Feynman–Kac approach requires integrability everywhere of the value function on Hilbert space.

3.1. Match without Interruption

Following [3], we know that in a one-day match each over can be represented as a multiple of 1 / 6 , which makes u a continuous variable. The objective function is
$\max_{\{W_i\in\mathcal{W}\}}\bar{Z}_T(\mathbf{W},u) = \max_{\{W_i\in\mathcal{W}\}}\mathbb{E}\left[\int_0^U\sum_{i=1}^{I}\sum_{m=1}^{M}\exp(-\rho_i m)\,\beta_i\,W_i(u)\,Z_{im}(u,w)\,du\right],$
where $U = 50$. In Equation (16), $\beta_i$ is the coefficient of the $i$th player's control.
Proposition 1.
Suppose the objective of team T is to maximize Equation (16) subject to (3), such that Assumptions 1 and 2 are satisfied. Then, the coefficient of the i t h player is obtained by solving the following equation,
$\sum_{i=1}^{I}\sum_{m=1}^{M}\exp(-\rho_i m)\,\beta_i\,Z_{im}(u,w) + \frac{\partial g[u,\mathbf{Z}(u,w)]}{\partial Z}\,\frac{\partial\mu[u,\mathbf{W}(u),\mathbf{Z}(u,w)]}{\partial\mathbf{W}}\,\frac{\partial\mathbf{W}}{\partial W_i} + \frac{1}{2}\sum_{i=1}^{I}\sum_{j=1}^{I}\frac{\partial\sigma_{ij}[u,\sigma_2,\mathbf{W}(u),\mathbf{Z}(u,w)]}{\partial\mathbf{W}}\,\frac{\partial\mathbf{W}}{\partial W_i}\,\frac{\partial^2 g[u,\mathbf{Z}(u,w)]}{\partial Z_i\,\partial Z_j} = 0,$
with respect to $\beta_i$, where the initial condition before the first ball has been delivered is $\mathbf{0}_{I\times 1}$. Furthermore, if $\beta_i = \beta_j = \beta$ for all $i\neq j$, then
$\beta(\mathbf{Z}) = -\left[\sum_{i=1}^{I}\sum_{m=1}^{M}\exp(-\rho_i m)\,Z_{im}(u,w)\right]^{-1}\left\{\frac{\partial g[u,\mathbf{Z}(u,w)]}{\partial Z}\,\frac{\partial\mu[u,\mathbf{W}(u),\mathbf{Z}(u,w)]}{\partial\mathbf{W}}\,\frac{\partial\mathbf{W}}{\partial W_i} + \frac{1}{2}\sum_{i=1}^{I}\sum_{j=1}^{I}\frac{\partial\sigma_{ij}[u,\sigma_2,\mathbf{W}(u),\mathbf{Z}(u,w)]}{\partial\mathbf{W}}\,\frac{\partial\mathbf{W}}{\partial W_i}\,\frac{\partial^2 g[u,\mathbf{Z}(u,w)]}{\partial Z_i\,\partial Z_j}\right\},$
where $g[u,\mathbf{Z}(u,w)]\in C^2\big([0,50]\times\mathbb{R}^{I\times M}\big)$, with $Y(u) = g[u,\mathbf{Z}(u,w)]$, is a positive, non-decreasing penalization function vanishing at infinity which substitutes for the run dynamics such that $Y(u)$ is an Itô process.
Proof. 
See Appendix A. □
Example 1.
Let Equation (3) have the form
$d\mathbf{Z}(u,w) = \mu_1\,\mathbf{Z}(u,w)\,du + \mu_2[u,\mathbf{W}(u)]\,du + \sigma[u,\sigma_2,\mathbf{W}(u)]\,d\mathbf{B}(u),$
where μ 1 and μ 2 are two I × M -dimensional matrices with constants and controls such that the total drift of the system is
$\mu = \mu_1\,\mathbf{Z}(u,w) + \mu_2[u,\mathbf{W}(u)].$
Multiplying both sides of Equation (18) by $\exp(-\mu_1 u)$ yields
$\exp(-\mu_1 u)\,d\mathbf{Z}(u,w) - \exp(-\mu_1 u)\,\mu_1\,\mathbf{Z}(u,w)\,du = \exp(-\mu_1 u)\,\big\{\mu_2[u,\mathbf{W}(u)]\,du + \sigma[u,\sigma_2,\mathbf{W}(u)]\,d\mathbf{B}(u)\big\}.$
The left-hand side of Equation (20) can be written as $d\big[\exp(-\mu_1 u)\,\mathbf{Z}(u,w)\big]$. Clearly, $\exp(-\mu_1 u)$ is the integrating factor of the SDE (18), and the $\hat{g}$ function is $\exp(-\mu_1 u)\,\mathbf{Z}(u,w)$. To find the solution of the SDE (18) we use the Itô formula. Substituting
$d\big[\exp(-\mu_1 u)\,\mathbf{Z}(u,w)\big] = (-\mu_1)\exp(-\mu_1 u)\,\mathbf{Z}(u,w)\,du + \exp(-\mu_1 u)\,d\mathbf{Z}(u,w)$
into Equation (20) yields
$\exp(-\mu_1 U)\,\mathbf{Z}(U,w) - \mathbf{Z}(0,w) = \int_0^U\exp(-\mu_1 u)\,\mu_2[u,\mathbf{W}(u)]\,du + \int_0^U\exp(-\mu_1 u)\,\sigma[u,\sigma_2,\mathbf{W}(u)]\,d\mathbf{B}(u).$
Implementing Theorem 4.1.5 of [24] on Equation (21) yields
$\mathbf{Z}(U,w) = \exp(\mu_1 U)\Big[\mathbf{Z}(0,w) + \exp(-\mu_1 U)\,\sigma[U,\sigma_2,\mathbf{W}(U)]\,d\mathbf{B}(U) + \int_0^U\exp(-\mu_1 u)\,\big\{\mu_2[u,\mathbf{W}(u)]\,du + \sigma[u,\sigma_2,\mathbf{W}(u)]\,\mathbf{B}(u)\,du\big\}\Big].$
Since $\hat{g}(u,\mathbf{Z}) = \exp(-\mu_1 u)\,\mathbf{Z}(u,w)$ gives a solution to the SDE (18), for a given $\lambda$ we choose the function $g = \lambda\hat{g}$ to find the solution of the maximization problem. Since $\partial g/\partial u = \lambda(-\mu_1)\exp(-\mu_1 u)\,\mathbf{Z}(u,w)$, $\partial g/\partial\mathbf{Z} = \lambda\exp(-\mu_1 u)$, and $\partial^2 g/\partial\mathbf{Z}^T\partial\mathbf{Z} = 0$, Equation (17) implies
$\beta(\mathbf{Z}) = -\left[\sum_{i=1}^{I}\sum_{m=1}^{M}\exp(-\rho_i m)\,Z_{im}(u,w)\right]^{-1}\lambda\exp(-\mu_1 u)\,\frac{\partial\mu_2[u,\mathbf{W}(u)]}{\partial\mathbf{W}}\,\frac{\partial\mathbf{W}}{\partial W_i}.$
Assuming $\mu_2[u,\mathbf{W}(u)] = \mathbf{W}A_1^T + A_2$, where $A_1$ is an $M$-dimensional vector of constants and $A_2$ is an $I\times M$-dimensional constant matrix, yields
$\beta(\mathbf{Z}) = -\lambda\,A_1\exp(-\mu_1 u)\left[\sum_{i=1}^{I}\sum_{m=1}^{M}\exp(-\rho_i m)\,Z_{im}(u,w)\right]^{-1}.$
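Under the linear specification of Example 1, the coefficient collapses to $\beta(\mathbf{Z}) = -\lambda A_1\exp(-\mu_1 u)\big[\sum_i\sum_m\exp(-\rho_i m)Z_{im}\big]^{-1}$. A minimal sketch of that evaluation, with $A_1$ reduced to a single constant and all parameter values and past runs chosen as hypothetical placeholders:

import numpy as np

def beta_example1(u, Z_past, rho=0.05, lam=1.0, A1=0.9, mu1=0.001):
    """beta(Z) = -lam * A1 * exp(-mu1 * u) / sum_{i,m} exp(-rho * m) * Z_im (hypothetical inputs)."""
    M = Z_past.shape[1]
    m = np.arange(1, M + 1)
    denom = float((np.exp(-rho * m)[None, :] * Z_past).sum())
    return -lam * A1 * np.exp(-mu1 * u) / denom

rng = np.random.default_rng(3)
Z_past = rng.integers(0, 100, size=(11, 10)).astype(float)  # hypothetical past-run matrix
print(round(beta_example1(u=25.0, Z_past=Z_past), 6))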

3.2. Match Interrupted by Rain

Rain plays an important part in a match. Two things might happen: if the game is stopped due to light rain, the game resumes after the rain stops, and for the team batting in the second innings a run target is fixed by the Duckworth–Lewis method. On the other hand, if the rain is heavy, the match will be canceled. If this match is a group-level match, each team receives equal points; if the match is a quarterfinal, semi-final, or final, there is another reserved day in the future. In this section, we consider all these scenarios for mathematical modeling. Suppose, for a $U$-over one-day match, the game stops after $\tilde{U}-1$ overs because of the rain. After that, there are two possibilities: first, if the rain is heavy, the game will not resume; second, if the rain is not heavy and stops after a certain point of time, then, after drying the field, the match might be resumed. Based on the severity of the rain and the equipment used to dry the field, the match resumes for $(\tilde{U}, U-\varepsilon]$ overs, where $\varepsilon\geq 0$.
Definition 3.
Take a probability space $(\Omega,\mathcal{F},\mathcal{F}_u^{\mathbf{Z}},\mathbb{P})$ with sample space $\Omega$, $\sigma$-algebra $\mathcal{F}$, filtration at the $u$th over of the run $\mathbf{Z}$ as $\{\mathcal{F}_u^{\mathbf{Z}}\}\subseteq\mathcal{F}_u$, a probability measure $\mathbb{P}$, and a Brownian motion for rain $B_u$ with the form $B_u^{-1}(E)$ such that for $u\in[\tilde{U},U-\varepsilon]$, $E\subseteq\mathbb{R}$ is a Borel set. If $\tilde{U}$ is the game-stopping over and $b\in\mathbb{R}$ is a rain measure, then $\tilde{U} := \inf\{u\geq 0\,|\,B_u > b\}$.
Following a rain delay, numerous environmental changes can be observed on the field. The added moisture in the air allows fast bowlers to generate more swing. Additionally, since the pitch remains damp, the ball’s behavior can be unpredictable for the batsman. It is also challenging to completely dry the outfield, leaving dew on the grass, which slows the ball after a shot is played. As a result, pace bowlers benefit from these conditions, increasing their chances of taking wickets. Conversely, spinners are at a disadvantage because gripping the ball becomes difficult, making it easier for batsmen to hit. Moreover, the slippery outfield makes it harder to stop the ball before it reaches the boundary, leading to shifts in run scoring dynamics after the rain.
Definition 4.
Let $\delta_u: [\tilde{U},U-\varepsilon]\to(0,\infty)$ be a $C^2\big(u^\star\in[\tilde{U},U-\varepsilon]\big)$ over-process of a one-day match such that it replaces a stochastic process by Itô's lemma. Then, $\delta_u$ is a stochastic gauge of that match if $\tilde{U}^\star := u^\star + \delta_{u^\star}$ is a stopping over for each $u^\star\in[\tilde{U},U-\varepsilon]$ and $B_{u^\star} > b$, where $u^\star$ is the new over after resampling the stochastic interval $[\tilde{U},U-\varepsilon]$.
To grasp Definition 4, consider that $\tilde{P} = \{x_0,x_1,\ldots,x_p\}$ is a partition of a stochastic process over the interval $[\tilde{U},U-\varepsilon]$, with sampling points $\{u_i^\star\}_{i=1}^{p}$ such that $u_i^\star\in[x_{i-1},x_i]\subset(u_i^\star-\delta_{u_i^\star},\,u_i^\star+\delta_{u_i^\star})\subseteq[\tilde{U},U-\varepsilon]$. The significance of employing a stochastic gauge lies in the fact that, once the game begins at over $\tilde{U}$, the conditions are such that, in the worst-case scenario, rain could begin again shortly after $\tilde{U}$, leading to permanently stopping the game. Hence, sample points $u^\star$ are used instead of $u$ because we only consider proceeding to over $u+1$ if the rainfall measure is below $b$ millimeters, which makes $u^\star$ a stochastic over. In this context, the sample point $u^\star$ and the function $\delta_{u^\star}$ replace $u$. In Itô's sense, $u^\star$, with its associated $\delta_{u^\star}$, can be thought of as a $C^2$ function, where the Laplacian operator with respect to $u$ encodes all information about $u^\star$. Additionally, since the game resumes at over $\tilde{U}$ and concludes at $U-\varepsilon$, for any $\varepsilon\geq 0$, these two overs are considered stopping points such that $[\tilde{U},U-\varepsilon]\subseteq[\tilde{U},U]$.
Definition 5.
Given a stochastic over-interval $\hat{I} = [\tilde{U},U-\varepsilon]\subset\mathbb{R}$, a stochastic tagged partition of a one-day match is a finite set of ordered pairs $D = \{(u_i^\star,\hat{I}_i): i = 1,2,\ldots,p\}$ such that $\hat{I}_i = [x_{i-1},x_i]\subseteq[\tilde{U},U-\varepsilon]$, $u_i^\star\in\hat{I}_i$, $\bigcup_{i=1}^{p}\hat{I}_i = [\tilde{U},U-\varepsilon]$, and for $i\neq j$ we have $\hat{I}_i\cap\hat{I}_j = \{\emptyset\}$. The point $u_i^\star$ is the tag partition of the stochastic over-interval $\hat{I}_i$.
Definition 6.
If $D = \{(u_i^\star,\hat{I}_i): i = 1,2,\ldots,p\}$ is a tagged partition of the stochastic over-interval $\hat{I}$ and $\delta_u$ is a stochastic gauge on $\hat{I}$, then $D$ is stochastic $\delta$-fine if $\hat{I}_i\subset\delta_u(u_i^\star)$ for all $i = 1,2,\ldots,p$, where $\delta(u^\star) = \big(u^\star-\delta_u(u^\star),\,u^\star+\delta_u(u^\star)\big)$.
For a tagged partition as described in Definitions 5 and 6, and a function $\tilde{f}: [\tilde{U},U-\varepsilon]\times\mathbb{R}^{2I\times\hat{U}}\times\Omega\to\mathbb{R}^{I\times\hat{U}}$, the Riemann sum of $D$ is defined by
$S(\tilde{f},D) = (D_\delta)\sum\tilde{f}(u^\star,\hat{I},W,Z) = \sum_{i=1}^{p}\tilde{f}(u_i^\star,\hat{I}_i,W,Z),$
where $D_\delta$ is a $\delta$-fine division of $\mathbb{R}^{I\times\hat{U}}$ with the point-cell function $\tilde{f}(u_i^\star,\hat{I}_i,W,Z) = \tilde{f}(u_i^\star,W,Z)\,\ell(\hat{I}_i)$, where $\ell$ is the length of the over-interval and $\hat{U} = (U-\varepsilon)-\tilde{U}$ [29].
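To illustrate Definitions 4–6 and the Riemann sum above, the sketch below builds a (deterministic) $\delta$-fine tagged partition of a resumed-match interval and evaluates the point–cell sum $\sum_i\tilde{f}(u_i^\star)\,\ell(\hat{I}_i)$. The gauge and the integrand are simple hypothetical choices, and the rain-driven stochastic stopping rule is not modelled here.

import numpy as np

def delta_gauge(u):                     # hypothetical gauge delta_u > 0
    return 0.3 + 0.05 * u

def f_tilde(u):                         # hypothetical point function f~(u*, W, Z)
    return np.exp(-0.1 * u)

def riemann_sum(U_tilde=20.0, U_eps=50.0):
    """delta-fine tagged sum: each cell [x_{i-1}, x_i] fits inside (u* - delta(u*), u* + delta(u*))."""
    total, x = 0.0, U_tilde
    while x < U_eps:
        u_star = x                                        # tag the cell at its left endpoint
        step = min(0.9 * delta_gauge(u_star), U_eps - x)  # keep the cell strictly inside the gauge ball
        total += f_tilde(u_star) * step                   # point-cell term f~(u*) * ell(I_i)
        x += step
    return total

print(round(riemann_sum(), 4))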
Definition 7.
The integrable function $\tilde{f}(u^\star,\hat{I},W,Z)$ based on a continuous over with
$a = \int_{\tilde{U}}^{U-\varepsilon}\tilde{f}(u^\star,\hat{I},W,Z)$
is considered stochastically Henstock–Kurzweil integrable on $\hat{I}$ if, for a given vector $\hat{\varepsilon} > 0$, there exists a stochastic $\delta$-gauge in $[\tilde{U},U-\varepsilon]$ such that for every stochastic $\delta$-fine partition $D_\delta$ in $\mathbb{R}^{2I\times\hat{U}}$, we have $\mathbb{E}_{u^\star}\big\|a - (D_\delta)\sum\tilde{f}(u^\star,\hat{I},W,Z)\big\| < \hat{\varepsilon}$, where $\mathbb{E}_{u^\star}$ represents the conditional expectation on the run $\mathbf{Z}$ at the sample over $u^\star\in[\tilde{U},U-\varepsilon]$ of a non-negative function $\tilde{f}$ after the rain stops.
Proposition 2.
Define $h = \exp\big\{-\tilde{\varepsilon}\,\mathbb{E}_{u^\star}\big[\int_{u^\star}^{u^\star+\tilde{\varepsilon}}\tilde{f}(u^\star,\hat{I},W,Z)\,\Psi_{u^\star}(Z)\,dZ\big]\big\}$. If, for a small sample over-interval $[u^\star,u^\star+\tilde{\varepsilon}]$, $\frac{1}{N_{u^\star}}\int_{\mathbb{R}^{2I\times\hat{U}}\times I}h$ exists for a conditional gauge $\gamma = [\delta,\omega(\delta)]$, then the indefinite integral of $h$, $H(\mathbb{R}^{2I\times\hat{U}}\times I) = \frac{1}{N_{u^\star}}\int_{\mathbb{R}^{2I\times\hat{U}}\times I}h$, exists as a Stieltjes function in $E\big([u^\star,u^\star+\tilde{\varepsilon}]\times\mathbb{R}^{2I\times\hat{U}}\times\Omega\times\mathbb{R}^I\big)$ for all $N_{u^\star} > 0$.
Proof. 
See Appendix A. □
Remark 8.
Proposition 2 shows the existence of a Stieltjes integral. This proposition, with Corollary 1, guarantees the existence of a stochastic Itô–Henstock–Kurzweil–McShane–Feynman–Liouville-type path integral in run dynamics. Later, in Proposition 3, we use this integral to obtain the Wick-rotated Schrödinger-type equation. Uncertainty due to the severity of rain makes the strategy space fractal, and it can be glued to a $\sqrt{8/3}$-Liouville quantum gravity surface.
Corollary 1.
If $h$ is integrable on $\mathbb{R}^{2I\times\hat{U}}\times I$ as in Proposition 2, then for a given small continuous sample over-interval $[u^\star,u^{\star\star}]$ with $\tilde{\varepsilon} = u^{\star\star}-u^\star > 0$, there exists a $\gamma$-fine division $D_\gamma$ in $\mathbb{R}^{2I\times\hat{U}}\times I$ such that
$\Big|(D_\gamma)\sum h[u^\star,\hat{I},\hat{I}(Z),W,Z] - H(\mathbb{R}^{2I\times\hat{U}}\times I)\Big| \leq \frac{1}{2}\,|u^{\star\star}-u^\star| < \tilde{\varepsilon},$
where $\hat{I}(Z)$ is the interval of the run $Z$ in $\mathbb{R}^{2I\times\hat{U}}\times I$. This integral is a stochastic Itô–Henstock–Kurzweil–McShane–Feynman–Liouville-type path integral in the run dynamics of a sample over after a one-day match resumes following a rain interruption.
The objective function after rain is
$\max_{\{W_i\in\mathcal{W}\}}\hat{Z}_T(\mathbf{W},u) = \max_{\{W_i\in\mathcal{W}\}}\mathbb{E}\left[\int_{\tilde{U}}^{U-\varepsilon}\sum_{i=1}^{I}\sum_{m=1}^{M}\exp(-\rho_i m)\,\beta_i\,W_i(u)\,Z_{im}(u,w)\,du\right].$
Consider a domain $D$ in a 2-sphere $S$. First, we know that after the rain the pace bowlers will have extra swing, measured by the difference in the average swing before the rain compared to after the rain, say $\varphi_1\in D$. For $p$ partitions of $[\tilde{U},U-\varepsilon]$, define $\varphi_1 = \sum_{i=1}^{p}\varphi_{1i}\,\alpha_i$, where $\varphi_{1i}$ is the difference at the $u_i^\star$th over and $\alpha_i$ is an orthonormal basis. Second, the measure of the slowness of the outfield is $\varphi_2 = \sum_{i=1}^{p}\varphi_{2i}\,\tilde{\beta}_i$, where $\varphi_{2i}$ is the difference in the speed of a ball after a batsman offers a shot and $\tilde{\beta}_i$ is an orthonormal basis. As $\varphi_{1i}$ and $\varphi_{2i}$ vary at each $u_i^\star$, we assume they are random variables, with the Dirichlet inner products $(\varphi_{11},\varphi_{12})_\nabla := (2\pi)^{-1}\int_D\nabla\varphi_{11}(u)\cdot\nabla\varphi_{12}\,du$ and $(\varphi_{21},\varphi_{22})_\nabla := (2\pi)^{-1}\int_D\nabla\varphi_{21}(u)\cdot\nabla\varphi_{22}\,du$ such that $\varphi := \varphi_1+\varphi_2$, where $\nabla$ is the gradient vector. Thus, $\varphi$ represents a centered Gaussian free field on a bounded, simply connected domain $D$ with zero boundary conditions [30]. The pairs $(D,\varphi)$ and $(\hat{D},\hat{\varphi})$ are considered equivalent in $\sqrt{8/3}$-Liouville quantum gravity if there exists a conformal map $\varpi:\hat{D}\to D$ such that $\hat{\varphi} = \varphi\circ\varpi + Q\log|\varpi'|$, where $Q = \sqrt{3/2}+\sqrt{2/3}$ and $\gamma = \sqrt{8/3}$ [31]. The significance of the $\sqrt{8/3}$-Liouville quantum gravity surface lies in the fact that its natural measure is the limit of regularized versions of $\exp\big[\sqrt{8/3}\,\varphi(u)\big]\,du$, with $du$ being a stochastic Henstock-type measure in $D$ [32]. When incorporating this into the run dynamics, with $\lambda_{u^\star}$ as the constant multiplier of $\sqrt{8/3}$-Liouville quantum gravity, Equation (3) transforms into
$d\mathbf{Z}(u,w) = \mu[u,\mathbf{W}(u),\mathbf{Z}(u,w)]\,du + \exp\big[\sqrt{8/3}\,\varphi(u)\big]\,du + \hat{\sigma}[u,\sigma_2,\mathbf{W}(u),\mathbf{Z}(u,w)]\,d\mathbf{B}(u),$
and the Liouville-like action function on the run dynamics after the match resumes following the rain is
$\mathcal{L}_{\tilde{U},U-\varepsilon}(\mathbf{Z}) = \int_{\tilde{U}}^{U-\varepsilon}\mathbb{E}_{u^\star}\Big\{\sum_{i=1}^{I}\sum_{m=1}^{M}\exp(-\rho_i m)\,\beta_i\,W_i(u)\,Z_{im}(u,w)\,du + \lambda_{u^\star}\big[\mathbf{W}(u+du)\,du - \mathbf{W}(u)\,du - \mu[u,\mathbf{W}(u),\mathbf{Z}(u,w)]\,du - \exp\big[\sqrt{8/3}\,\varphi(u)\big]\,du - \hat{\sigma}[u,\sigma_2,\mathbf{W}(u),\mathbf{Z}(u,w)]\,d\mathbf{B}(u)\big]\Big\}.$
The stochastic part of Equation (25) becomes $\hat{\sigma}$ because, after the rain, $\hat{\lambda}_1 > \lambda_1$ (the dew point measure rises). Equation (26) follows Definition 7 with $a = \mathcal{L}_{\tilde{U},U-\varepsilon}(\mathbf{Z})$, and it is integrable according to Corollary 1.
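For intuition about the Gaussian free field $\varphi$ and the regularized $\sqrt{8/3}$-Liouville weights $\exp[\sqrt{8/3}\,\varphi(u)]$ entering Equations (25) and (26), the sketch below samples a zero-boundary discrete Gaussian free field on an $n\times n$ grid via the sine eigenbasis of the Dirichlet Laplacian and forms the unnormalized weights. The grid size and normalization are illustrative choices, not the regularization used in [30,31,32].

import numpy as np

def sample_dgff(n=64, seed=4):
    """Zero-boundary discrete GFF: phi = sum_{j,k} xi_jk / sqrt(lambda_jk) * e_j(x) e_k(y)."""
    rng = np.random.default_rng(seed)
    j = np.arange(1, n + 1)
    lam1d = 2.0 - 2.0 * np.cos(np.pi * j / (n + 1))     # Dirichlet Laplacian eigenvalues (1-d)
    lam2d = lam1d[:, None] + lam1d[None, :]             # 2-d eigenvalues lambda_jk
    x = np.arange(1, n + 1)
    S = np.sqrt(2.0 / (n + 1)) * np.sin(np.pi * np.outer(j, x) / (n + 1))  # sine eigenvectors
    xi = rng.standard_normal((n, n))
    return S.T @ (xi / np.sqrt(lam2d)) @ S              # field values on the n x n grid

phi = sample_dgff()
weights = np.exp(np.sqrt(8.0 / 3.0) * phi)              # unnormalized Liouville weights
print(round(float(phi.std()), 3), round(float(weights.mean()), 3))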
Proposition 3.
If team T's objective is to maximize Equation (24) subject to the run dynamics in Equation (25), such that Assumptions 1–3 hold together with Lemma 3, Proposition 2, and Corollary 1, then, after a rain stoppage and under a continuous sample-over system of the match, the coefficient is
$\beta(\mathbf{Z}) = -\left[\sum_{i=1}^{I}\sum_{m=1}^{M}\exp(-\rho_i m)\,Z_{im}(u,w)\right]^{-1}\times\Bigg\{\frac{\partial g_a[u,\mathbf{Z}(u,w)]}{\partial Z}\,\frac{\partial}{\partial\mathbf{W}}\Big\{\mu[u,\mathbf{W}(u),\mathbf{Z}(u,w)] + \exp\big[\sqrt{8/3}\,\varphi(u)\big]\Big\}\frac{\partial\mathbf{W}}{\partial W_i} + \frac{1}{2}\sum_{i=1}^{I}\sum_{j=1}^{I}\frac{\partial\hat{\sigma}_{ij}[u,\sigma_2,\mathbf{W}(u),\mathbf{Z}(u,w)]}{\partial\mathbf{W}}\,\frac{\partial\mathbf{W}}{\partial W_i}\,\frac{\partial^2 g_a[u,\mathbf{Z}(u,w)]}{\partial Z_i\,\partial Z_j}\Bigg\},$
where $\beta_i = \beta_j = \beta$ for all $i\neq j$; $\mathbf{Z}_{\tilde{U}}$ is the initial run condition; and the function
$g_a[u,\mathbf{Z}(u,w)]\in C_0^2\big([\tilde{U},U-\varepsilon]\times\mathbb{R}^{2I\times\hat{U}}\times\mathbb{R}^I\big),$
with $Y(u) = g_a[u,\mathbf{Z}(u,w)]$, is a positive, non-decreasing penalization function vanishing at infinity which substitutes for the run dynamics such that $Y(u)$ is an Itô process.
Proof. 
See Appendix A. □
Example 2.
Similar to Example 1, here we construct a g a function so that it maximizes Equation (24) subject to the SDE (25). Assume the SDE given by Equation (25) exhibits the form
$d\mathbf{Z}(u,w) = \mu_1\,\mathbf{Z}(u,w)\,du + \mu_2[u,\mathbf{W}(u)]\,du + \exp\big[\sqrt{8/3}\,\varphi(u)\big]\,du + \sigma[u,\sigma_2,\mathbf{W}(u)]\,d\mathbf{B}(u),$
where μ 1 and μ 2 are two I × M -dimensional matrices with constants and controls such that the total drift of the system is
$\mu = \mu_1\,\mathbf{Z}(u,w) + \mu_2[u,\mathbf{W}(u)].$
By assuming $\tilde{g}_a = \exp(-\mu_1 u)\,\mathbf{Z}(u,w)$, the solution to Equation (25) becomes
$\mathbf{Z}(U-\varepsilon,w) = \exp\{\mu_1(U-\varepsilon)\}\Big[\mathbf{Z}(\tilde{U},w) + \exp\{-\mu_1(U-\varepsilon)\}\,\sigma[U-\varepsilon,\sigma_2,\mathbf{W}(U-\varepsilon)]\,d\mathbf{B}(U-\varepsilon) + \int_{\tilde{U}}^{U-\varepsilon}\exp(-\mu_1 u)\,\big\{\mu_2[u,\mathbf{W}(u)]\,du + \exp\big[\sqrt{8/3}\,\varphi(u)\big]\,du + \sigma[u,\sigma_2,\mathbf{W}(u)]\,\mathbf{B}(u)\,du\big\}\Big].$
For a given $\lambda$, choose $g_a = \lambda\tilde{g}_a$. Therefore, Equation (27) implies
$\beta(\mathbf{Z}) = -\left[\sum_{i=1}^{I}\sum_{m=1}^{M}\exp(-\rho_i m)\,Z_{im}(u,w)\right]^{-1}\lambda\exp(-\mu_1 u)\,\frac{\partial}{\partial\mathbf{W}}\Big\{\mu[u,\mathbf{W}(u),\mathbf{Z}(u,w)] + \exp\big[\sqrt{8/3}\,\varphi(u)\big]\Big\}\frac{\partial\mathbf{W}}{\partial W_i}.$
Assuming $\mu_2[u,\mathbf{W}(u)] = \mathbf{W}A_1^T + A_2$, where $A_1$ is an $M$-dimensional vector of constants and $A_2$ is an $I\times M$-dimensional constant matrix, yields
$\beta(\mathbf{Z}) = -\lambda\,A_1\exp(-\mu_1 u)\left[\sum_{i=1}^{I}\sum_{m=1}^{M}\exp(-\rho_i m)\,Z_{im}(u,w)\right]^{-1}.$
Remark 9.
Although $\sqrt{8/3}$-Liouville quantum gravity has been introduced in Example 2, the optimal coefficient of control remains the same as in Example 1.

4. Analysis of Real Data

In this section, we perform an analysis of real data from the Indian men's national cricket team's performances in their last six matches against Australia and New Zealand. We have used the database [33]. This database provides ball-by-ball match data for men's and women's test matches, one-day internationals, Twenty20 internationals, some other international T20s, and various club competitions. In this section, the last six one-day international match runs are considered. Although the theoretical analysis is based on overs, ball-by-ball runs have been considered because of extra balls delivered as a result of no-balls and wide balls. The reason behind choosing the last six matches is that India played against two of the toughest teams: Australia (from 17 March to 22 March 2023) and New Zealand (from 18 January to 24 January 2023). We assume the run dynamics as expressed in Equation (18). Since the drift part of the equation is $\mu_1\mathbf{Z}(u,w) + \mu_2$, a linear regression of $\mathbf{Z}$ on $u$ yields $\mu_1 = 0.0009843$ and $\mu_2 = 0.8563$. Then, computing the pooled standard deviation of the individual runs of all six matches, we obtain $\sigma = 1.288$.
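The estimation step described above can be reproduced schematically as follows; since the ball-by-ball data from [33] are not included here, synthetic per-ball runs stand in for the real series, so the printed numbers will differ from the quoted $\mu_1$, $\mu_2$, and $\sigma$.

import numpy as np

rng = np.random.default_rng(5)
# Synthetic stand-in for six matches of ball-by-ball runs (database [33] is not reproduced here)
runs = [rng.choice([0, 1, 2, 3, 4, 6], size=333, p=[0.45, 0.33, 0.08, 0.01, 0.09, 0.04])
        for _ in range(6)]

u = np.tile(np.arange(1, 334), 6).astype(float)        # ball index within each match
z = np.concatenate(runs).astype(float)

# Regression of per-ball runs on the ball index: z ~ mu_2 + mu_1 * u
A = np.column_stack([u, np.ones_like(u)])
(mu_1, mu_2), *_ = np.linalg.lstsq(A, z, rcond=None)

sigma = z.std(ddof=1)                                  # pooled standard deviation of per-ball runs
print(round(float(mu_1), 6), round(float(mu_2), 4), round(float(sigma), 4))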
Figure 1 represents the simulation study based on the SDE represented by Equation (18). Since a batsman cannot score more than six runs, we limit our y-axis to six (the possibility of hitting a six from a no-ball has been omitted since this case has never occurred in the last six matches). Figure 2 represents ball-by-ball runs scored by Indian batsmen in the last six matches. Each color represents one match. In Figure 1 and Figure 2, after 300 balls there are multiple spikes of sixes because during slog overs batsmen have to score as many runs as possible to ensure victory. Since extra balls due to no-balls and wide balls are counted, the total number of balls bowled in every match is 333. The simulation in Figure 1 gives a good prediction of the run dynamics.
Figure 3 compares the actual runs with the simulated SDE for the last one-day international played by India against Australia. To determine the drift coefficients of Equation (18), a linear regression analysis was performed as above, and we obtain $\mu_1 = 0.000674$ and $\mu_2 = 0.8567$. The standard deviation of runs in the last one-day international is $\sigma = 1.22636$. In Figure 3, the blue and red lines represent the simulation study and the actual runs per ball, respectively. Finally, Figure 4 represents the relationship between the optimal control coefficient and the total number of balls delivered. Equations (23) and (31) are used to find the optimal $\beta$. We assume the discount rate $\rho_i$ for all players is fixed at 0.01, the Lagrangian multiplier $\lambda = 1$, and $A_1 = \mu_2$. We observe that as a match comes to an end, the value of $\beta$ falls. Intuitively, as more overs have been delivered, the result becomes less uncertain. Therefore, control strategies become rather deterministic instead of stochastic.
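The shape of Figure 4 follows from Equations (23) and (31) with $\rho_i = 0.01$, $\lambda = 1$, $A_1 = \mu_2$, and the estimated $\mu_1$: the magnitude of $\beta$ is proportional to $\exp(-\mu_1 u)$ divided by the discounted sum of cumulative runs up to ball $u$, which grows as the innings progresses. The sketch below uses synthetic cumulative run paths in place of the actual data from [33], so only the decreasing shape, not the printed values, should be read off.

import numpy as np

mu_1, mu_2, rho, lam = 0.000674, 0.8567, 0.01, 1.0     # values quoted in this section
I, M, n_balls = 11, 10, 333
rng = np.random.default_rng(6)

# Synthetic cumulative runs Z_im(u): per-ball increments accumulated within each past match
inc = rng.choice([0, 1, 2, 4, 6], size=(I, M, n_balls), p=[0.5, 0.33, 0.08, 0.06, 0.03])
Z_cum = inc.cumsum(axis=2)

weights = np.exp(-rho * np.arange(1, M + 1))           # discounting over the last M matches
beta = np.empty(n_balls)
for u in range(n_balls):
    denom = float((weights[None, :] * Z_cum[:, :, u]).sum())
    beta[u] = lam * mu_2 * np.exp(-mu_1 * u) / max(denom, 1e-9)   # |beta|, sign as in Equation (31)

print(round(beta[0], 5), round(beta[150], 5), round(beta[-1], 5))  # beta falls as balls accumulate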

5. Concluding Remarks

In this paper, we use a Feynman-type path integral technique to determine the optimality coefficient $\beta(\mathbf{Z})$ for a one-day match with and without rain interruption. This coefficient tells us how to select a player to bat at a certain position based on the condition of the match. We consider different types of environmental conditions such as the wind speed, the moisture on the field, the speed of the ball on the outfield, extra swing from the bowler, and how hard it is for a spinner to grip the ball during a delivery. In the second part, we focus on the more volatile environment after the rain stops. We have assumed that after the rain stoppage the occurrence of each over strictly depends on the amount of rain at that sample over. Using Itô's lemma, we define a $\delta_u$-gauge which generates a sample over $u^\star$, instead of an actual over $u$, that is assumed to follow a Wiener process. Furthermore, we assume the strategy space of a bowler from the opposition team has a $\sqrt{8/3}$-Liouville-like quantum gravity surface, and we construct a stochastic Itô–Henstock–Kurzweil–McShane–Feynman–Liouville-type path integral to solve for the optimality coefficient.
Examples 1 and 2 yield the same expression for β(Z), which leads to the conclusion that a rain interruption does not affect the control coefficient. Intuitively, since accurately predicting rain before the game starts is almost impossible, a team sticks to strategies based on the opposition players’ previous performances. Furthermore, in Figure 4 we see that the value of the coefficient decreases as more balls are delivered. Intuitively, as more balls are delivered, the result of the match becomes more certain, and a team’s control over different strategies becomes less effective. Finally, in Section 4, Figure 1 gives a good prediction of the actual run dynamics of the past six matches played by India against Australia and New Zealand (shown in Figure 2). In future research, this method can be applied to stochastic control problems with infinite-dimensional state spaces.

Author Contributions

P.P. takes almost sole responsibility for this paper. A.M.P. performed some editing to improve the quality of this paper. All authors have read and agreed to the published version of the manuscript.

Funding

This research received no external funding.

Informed Consent Statement

Not applicable.

Data Availability Statement

The database [33] has been used in the preparation of this paper.

Conflicts of Interest

The authors declare no conflicts of interest.

Appendix A

Appendix A.1. Proof of Lemma 1

Suppose the measure of valuation of all the players in team T is $W(u) = h[Z(u,w)]$, where $h$ is a $C^{1,2}$ function. Here, the Markov control $W(u)$ can be written in terms of $Z$ because, if player $i$ scores more runs, his reputation as a batsman will be higher and $W(u)$ will take a very high value. Hence, with a slight abuse of notation, we write $W(u) = h(Z)$. For all $Z_i \in Z$, where $i = 1, 2, \ldots, I$, applying Itô’s formula to $W(u)$ yields
$$
dW(u) = \sum_{i=1}^{I} \hat{\mu}_i \frac{\partial h(Z)}{\partial Z_i}\, du + \sum_{i=1}^{I} \frac{\partial h(Z)}{\partial Z_i}\, (\sigma\, dB)_i + \frac{1}{2} \sum_{i=1}^{I} \sum_{j=1}^{I} \frac{\partial^2 h(Z)}{\partial Z_i\, \partial Z_j}\, (\sigma \sigma^T)_{ij}\, du.
$$
After using the integral form of $(\sigma\, dB)_i\, (\sigma\, dB)_j = (\sigma \sigma^T)_{ij}\, du$, Equation (A1) multiplied by $u^2$ is
$$
u^2 h(Z_U) = u^2 h(Z_0) + \int_0^U u^2 \left[ \sum_{i=1}^{I} \hat{\mu}_i \frac{\partial h(Z)}{\partial Z_i} + \frac{1}{2} \sum_{i=1}^{I} \sum_{j=1}^{I} (\sigma \sigma^T)_{ij}\, \frac{\partial^2 h(Z)}{\partial Z_i\, \partial Z_j} \right] du + \int_0^U u^2 \sum_{i=1}^{I} \frac{\partial h(Z)}{\partial Z_i}\, (\sigma\, dB)_i.
$$
Subdivide the entire interval $[0, U]$ of a one-day match into small over-intervals $[\nu, \nu + \varepsilon]$ such that $\varepsilon \downarrow 0$, and define $\nu + \varepsilon = \tilde{\nu}$. Therefore,
$$
E_{\tilde{\nu}}\!\left[\tilde{\nu}^2 h(Z_{\tilde{\nu}})\right] = \nu^2 h(Z_{\nu}) + E_{\nu}\!\int_{\nu}^{\tilde{\nu}} u^2 \left[ \sum_{i=1}^{I} \hat{\mu}_i \frac{\partial h(Z)}{\partial Z_i} + \frac{1}{2} \sum_{i=1}^{I} \sum_{j=1}^{I} (\sigma \sigma^T)_{ij}\, \frac{\partial^2 h(Z)}{\partial Z_i\, \partial Z_j} \right] du + E_{\nu}\!\int_{\nu}^{\tilde{\nu}} u^2 \sum_{i=1}^{I} \frac{\partial h(Z)}{\partial Z_i}\, (\sigma\, dB)_i.
$$
Taking natural logarithms on both sides yields
$$
\log E_{\tilde{\nu}}\!\left[\tilde{\nu}^2 h(Z_{\tilde{\nu}})\right] = \log\!\left[\nu^2 h(Z_{\nu})\right] + \log\!\left\{ 1 + \frac{1}{\nu^2 h(Z_{\nu})}\, E_{\nu}\!\int_{\nu}^{\tilde{\nu}} u^2 \left[ \sum_{i=1}^{I} \hat{\mu}_i \frac{\partial h(Z)}{\partial Z_i} + \frac{1}{2} \sum_{i=1}^{I} \sum_{j=1}^{I} (\sigma \sigma^T)_{ij}\, \frac{\partial^2 h(Z)}{\partial Z_i\, \partial Z_j} \right] du + \frac{1}{\nu^2 h(Z_{\nu})}\, E_{\nu}\!\int_{\nu}^{\tilde{\nu}} u^2 \sum_{i=1}^{I} \frac{\partial h(Z)}{\partial Z_i}\, (\sigma\, dB)_i \right\}.
$$
Assume $K(Z_u) = u^2 \kappa(Z_u)$ is a bounded Borel-measurable function, where $\sigma u^2\, \nabla_{\!Z} h(Z_u) \leq u^2 \kappa(Z_u)$, and for a finite $M$ we have $|K(Z_u)| \leq M$. For all integers $m$ and a simple function $\phi_{[u < \tilde{\nu}]}$ we have
$$
E_{\nu}\!\left[\int_{\nu}^{\tilde{\nu} \wedge m} K(Z_u)\, dB_u\right] = E_{\nu}\!\left[\int_{0}^{\tilde{\nu} \wedge m} K(Z_u)\, dB_u\right] = E_{\nu}\!\left[\int_{0}^{m} \phi_{[u < \tilde{\nu}]}\, K(Z_u)\, dB_u\right] = 0,
$$
as $\phi_{[u < \tilde{\nu}]}$ and $K(Z_u)$ are $\mathcal{H}_u$-measurable, where $\mathcal{H}_u$ is the $\sigma$-algebra generated by $u$. Moreover,
$$
E_{\nu}\!\left[\left(\int_{\nu}^{\tilde{\nu} \wedge m} K(Z_u)\, dB_u\right)^{2}\right] \leq E_{\nu}\!\left[\left(\int_{0}^{\tilde{\nu} \wedge m} K(Z_u)\, dB_u\right)^{2}\right] = E_{\nu}\!\left[\int_{0}^{\tilde{\nu} \wedge m} K^{2}(Z_u)\, du\right] \leq M^{2}\, E_{\nu}[\tilde{\nu}] < \infty.
$$
As, at the beginning of the over-interval, the batsman does not know what kind of ball is going to be delivered, their conditional expectation with respect to the runs at $\nu$ should be the same as at $\tilde{\nu}$. Hence, in Equation (A4) we assume $E_{\nu}[\tilde{\nu}] = E[\tilde{\nu}]$. From the above argument we can say that $\left\{ \int_{\nu}^{\tilde{\nu} \wedge m} K(Z_u)\, dB_u \right\}_{m > 0}$ is uniformly integrable with respect to the probability law starting at $Z_{\nu}$, which is defined as $R_{\nu}$. Finally, taking the limit with respect to $m$, Equation (A3) becomes
$$
0 = \lim_{m \to \infty} E_{\nu}\!\left[\int_{\nu}^{\tilde{\nu} \wedge m} K(Z_u)\, dB_u\right] = E_{\nu}\!\left[\lim_{m \to \infty} \int_{\nu}^{\tilde{\nu} \wedge m} K(Z_u)\, dB_u\right] = E_{\nu}\!\left[\int_{\nu}^{\tilde{\nu}} K(Z_u)\, dB_u\right],
$$
so that
$$
\log E_{\tilde{\nu}}\!\left[\tilde{\nu}^2 h(Z_{\tilde{\nu}})\right] = \log\!\left[\nu^2 h(Z_{\nu})\right] + \log\!\left\{ 1 + \frac{1}{\nu^2 h(Z_{\nu})}\, E_{\nu}\!\int_{\nu}^{\tilde{\nu}} u^2 \left[ \sum_{i=1}^{I} \hat{\mu}_i \frac{\partial h(Z)}{\partial Z_i} + \frac{1}{2} \sum_{i=1}^{I} \sum_{j=1}^{I} (\sigma \sigma^T)_{ij}\, \frac{\partial^2 h(Z)}{\partial Z_i\, \partial Z_j} \right] du \right\}.
$$
This completes the proof. □
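A quick numerical sanity check of the step that makes the stochastic integral vanish in expectation can be run in a few lines. The sketch below is a Monte Carlo illustration under simplifying assumptions: one dimension, h(z) = z², constant drift and diffusion, and no u² weighting; the numerical values are illustrative and not taken from the paper.

```python
import numpy as np

rng = np.random.default_rng(0)

# One-dimensional Monte Carlo illustration: for dZ = mu_hat du + sigma dB and
# h(z) = z^2, E[h(Z_{nu+eps})] should equal h(z0) plus the expected integral of
# the generator term mu_hat h'(Z) + (1/2) sigma^2 h''(Z), because the
# stochastic-integral term has zero (conditional) expectation.
mu_hat, sigma = 0.8, 1.3       # illustrative drift and diffusion
z0, eps = 25.0, 1.0            # start value and length of the over-interval
n_steps, n_paths = 200, 20_000
du = eps / n_steps

dB = np.sqrt(du) * rng.standard_normal((n_paths, n_steps))
Z = z0 + np.cumsum(mu_hat * du + sigma * dB, axis=1)
Z = np.hstack([np.full((n_paths, 1), z0), Z])

lhs = np.mean(Z[:, -1] ** 2)                        # E[h(Z_{nu+eps})]
gen = 2.0 * mu_hat * Z + sigma**2                   # mu h'(Z) + (1/2) sigma^2 h''(Z)
integral = np.sum(0.5 * (gen[:, :-1] + gen[:, 1:]) * du, axis=1)  # trapezoidal rule
rhs = z0**2 + np.mean(integral)
print(f"lhs = {lhs:.3f}, rhs = {rhs:.3f}")          # agree up to Monte Carlo error
```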

Appendix A.2. Proof of Lemma 2

By [24], the terms inside the first bracket on the right-hand side of Equation (6) can be replaced by $Ah(z)$. If $z \in \mathbb{R}^{I \times M}$ is a run trap, then $Ah(z) = 0$ and $\log[1 + \nu^2 Ah(z)] = \log 1 = 0$. Let $D_0$ be an open and bounded set such that $z \in D_0$, and extend $h$ outside $D_0$. Since $h$ is a $C^{1,2}$ function, $A(h) = Ah(z) = 0$. If $z \in \mathbb{R}^{I \times M}$ is not a trap, then consider a bounded open set $D_1$ containing $z$ such that $E_{\nu}[\nu^2] < \infty$. Lemma 1 implies
lim ε 0 log E ν ˜ [ ν ˜ 2 h ( Z ν ˜ ) ] log [ ν 2 h ( Z ν ) ] log E ν [ ν 2 ] log [ 1 + ν 2 A h ( z ) ] ν 2 h ( z ν ) = lim ε 0 E ν log 1 + ν ν ˜ u 2 A h ( Z ) d u E ν log 1 + ν ν ˜ u 2 A h ( Z ) d u ν 2 h ( z ν ) log E ν [ ν 2 ] = lim ε 0 E ν log 1 ν 2 h ( z ν ) log E ν [ ν 2 ] lim ε 0 sup w D 1 A f ( z ) A f ( w ) = 0 ,
for $|z - w| < |\xi|$, where for a finite positive number $\eta$ we define $|\xi| \leq \eta\, \varepsilon\, [Z^T]^{-1}$. The inequality in (A5) holds because both the natural logarithm and the operator $Ah$ are continuous. This completes the proof. □
Proof of Lemma 3.
Assume k ( g ^ ) = exp ( ı θ g ^ ) , θ R and g ^ S I × U . By the Feynman–Kac representation theorem, we know
( u , g ^ ) = E [ k ( g ^ U ) | g ^ u = g ^ ] = E [ exp ( ı θ g ^ U ) | g ^ u = g ^ ]
is the unique bounded solution of the backward parabolic partial differential equation
0 = u ( u , g ^ ) 1 g ^ κ Υ η ( A ) L g ˜ + g ^ 2 g ^ ( u , g ^ ) 1 2 κ Υ η ( A ) L g ˜ u g ^ 2 2 g ^ 2 ( u , g ^ ) ,
for all u [ 0 , U ] , with the terminal condition of match M + 1 as ( U , g ^ ) = k ( g ^ ) = exp ( ı θ g ^ ) for all g ^ S I × U , as the characteristic function of g ^ U is Φ g ^ U ( θ ) = ( 0 , g ^ ) = E [ exp ( ı θ g ^ ) | g ^ 0 = g ^ ] . Assume is a C 2 function such that ( u , g ^ ) = exp { ı θ α ( u ) + β ( u ) } , where at the final over α ( U ) = 1 and β ( U ) = 0 . Now,
u ( u , g ^ ) = ı θ g ^ α ( u ) u + β ( u ) u ( u , g ^ ) ,
g ^ ( u , g ^ ) = ı θ α ( u ) ( u , g ^ ) ,
and
2 g ^ 2 ( u , g ^ ) = θ 2 α 2 ( u ) ( u , g ^ ) .
The results of Equations (A6) and (A7) imply
ı θ g ^ α ( u ) u 1 g ^ κ Υ η ( A ) L g ˜ + g ^ 2 α ( u ) + 1 2 θ 2 α 2 ( u ) κ Υ η ( A ) L g ˜ u g ^ 2 + β ( u ) u = 0 .
In order to maintain the zero right-hand side condition of Equation (A8) for each over u [ 0 , U ] , we must have
α ( u ) u = 1 g ^ 2 κ Υ η ( A ) L g ˜ + g ^ 2 α ( u )
and
β ( u ) u = 1 2 θ 2 α 2 ( u ) κ Υ η ( A ) L g ˜ u g ^ 2 .
Solving the differential equation in Equation (A9) with the final over condition α ( U ) = 1 yields
α ( u ) = exp 1 g ^ 2 κ Υ η ( A ) L g ˜ + g ^ 2 ( U u ) .
Hence,
β ( u ) u = 1 2 θ 2 κ Υ η ( A ) L g ˜ u g ^ 2 exp 2 g ^ 2 κ Υ η ( A ) L g ˜ + g ^ 2 ( U u ) ,
with the integral equation for the over-interval [ 0 , u ] given by
β ( u ) β ( 0 ) = 1 4 θ 2 κ Υ η ( A ) L g ˜ u κ Υ η ( A ) L g ˜ + g ^ 2 exp 2 g ^ 2 κ Υ η ( A ) L g ˜ + g ^ 2 ( U u ) exp 2 g ^ 2 κ Υ η ( A ) L g ˜ + g ^ 2 U .
The terminal condition β ( U ) = 0 implies
β ( 0 ) = 1 4 θ 2 κ Υ η ( A ) L g ˜ u κ Υ η ( A ) L g ˜ + g ^ 2 exp 2 g ^ 2 κ Υ η ( A ) L g ˜ + g ^ 2 U 1 ,
and therefore,
β ( u ) = 1 4 θ 2 κ Υ η ( A ) L g ˜ u κ Υ η ( A ) L g ˜ + g ^ 2 exp 2 g ^ 2 κ Υ η ( A ) L g ˜ + g ^ 2 ( U u ) 1 .
Equations (A10) and (A11) yield
( u , g ^ ) = exp ı θ exp 1 g ^ 2 κ Υ η ( A ) L g ˜ + g ^ 2 ( U u ) 1 4 θ 2 κ Υ η ( A ) L g ˜ u κ Υ η ( A ) L g ˜ + g ^ 2 × exp 2 g ^ 2 κ Υ η ( A ) L g ˜ + g ^ 2 ( U u ) 1 .
Taking u = 0 yields
( 0 , g ^ ) = Φ g ^ U ( θ ) = exp ı θ exp 1 g ^ 2 κ Υ η ( A ) L g ˜ + g ^ 2 U 1 4 θ 2 κ Υ η ( A ) L g ˜ u κ Υ η ( A ) L g ˜ + g ^ 2 × exp 2 g ^ 2 κ Υ η ( A ) L g ˜ + g ^ 2 U 1 .
This completes the proof. □
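The terminal-value ODE step in this proof can be checked symbolically. In the sketch below, the bracketed coefficient expressions appearing in (A9)–(A11) are abbreviated by generic constants c and d (an assumption of this sketch), so only the structure of the solution is verified: an exponential in (U − u) for α and a factor of exp(2c(U − u)) − 1 for β.

```python
import sympy as sp

# Generic stand-ins c and d for the coefficient expressions in (A9)-(A11).
u, U, c, d, theta = sp.symbols("u U c d theta", positive=True)
alpha = sp.Function("alpha")
beta = sp.Function("beta")

# Terminal-value problem alpha'(u) = -c*alpha(u), alpha(U) = 1 (cf. Eq. (A9)).
sol_alpha = sp.dsolve(sp.Eq(alpha(u).diff(u), -c * alpha(u)),
                      alpha(u), ics={alpha(U): 1})

# beta'(u) = -(theta^2/2)*d*alpha(u)^2 with beta(U) = 0, the quadrature
# leading to Eq. (A11).
sol_beta = sp.dsolve(sp.Eq(beta(u).diff(u),
                           -sp.Rational(1, 2) * theta**2 * d * sol_alpha.rhs**2),
                     beta(u), ics={beta(U): 0})

print(sp.simplify(sol_alpha.rhs))   # an exponential in (U - u)
print(sp.simplify(sol_beta.rhs))    # proportional to exp(2*c*(U - u)) - 1
```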

Appendix A.3. Proof of Proposition 1

Using Equations (16) and (3), with the zero initial condition, the Lagrangian of run dynamics over a 50-over match is
L 0 , U ( Z ) = 0 U E u i = 1 I m = 1 M exp ( ρ i m ) β i W i ( u ) Z i m ( u , w ) d u + λ ( u + d u ) [ W ( u + d u ) d u W ( u ) d u μ [ u , W ( u ) , Z ( u , w ) ] d u σ [ u , σ 2 , W ( u ) , Z ( u , w ) ] d B ( u ) ] ,
where λ is a non-negative Lagrangian multiplier [34]. Subdivide [ 0 , U ] into n equal over-intervals [ u , u + ε ] [35]. For any positive ε and normalizing constant N u > 0 , define the run transition function as
Ψ u , u + ε ( Z ) = 1 N u R I exp ε L u , u + ε ( Z ) Ψ u ( Z ) d Z ,
where Ψ u ( Z ) is the run transition probability at the beginning of u and N u 1 d Z is a finite Wiener measure such that
Ψ 0 , U ( Z ) = 1 N u n R I × n exp ε k = 1 n L u , u + ε k ( Z ) Ψ 0 ( Z ) k = 1 n d Z k ,
with the finite measure N u n k = 1 n d Z k and initial transition function Ψ 0 ( Z ) > 0 for all n N [15].
Define $\Delta W(\nu) = W(\nu + d\nu) - W(\nu)$; then Fubini's theorem implies
L u , τ ( Z ) = E u 0 U i = 1 I m = 1 M exp ( ρ i m ) β i W i ( ν ) Z i m ( ν , w ) d ν + λ [ Δ W ( ν ) d ν μ [ ν , W ( ν ) , Z ( ν , w ) ] d ν σ [ ν , σ 2 , W ( ν ) , Z ( ν , w ) ] d B ( ν ) ] ,
where τ = u + ε . As we assume the run dynamics has drift and diffusion parts, Z ( ν , w ) is an Itô process, and W is a Markov control measure of the valuation of players, there exists a smooth function g [ ν , Z ( ν , w ) ] C 0 2 ( [ 0 , 50 ] × R I × M ) such that Y ( ν ) = g [ ν , Z ( ν , w ) ] , where Y ( ν ) is an Itô process [24]. Assuming
g [ ν + Δ ν , Z ( ν , w ) + Δ Z ( ν , w ) ] = λ [ Δ W ( ν ) d ν μ [ ν , W ( ν ) , Z ( ν , w ) ] d ν σ [ ν , σ 2 , W ( ν ) , Z ( ν , w ) ] d B ( ν ) ] ,
for a very small interval around u with ε 0 , the generalized Itô’s lemma yields
ε L u , τ ( Z ) = E u i = 1 I m = 1 M ε exp ( ρ i m ) β i W i ( u ) Z i m ( u , w ) + ε g [ u , Z ( u , w ) ] + ε g u [ u , Z ( u , w ) ] + ε g Z [ u , Z ( u , w ) ] μ [ u , W ( u ) , Z ( u , w ) ] + ε g Z [ u , Z ( u , w ) ] σ [ u , σ 2 , W ( u ) , Z ( u , w ) ] Δ B ( u ) + 1 2 i = 1 I j = 1 I ε σ i j [ u , σ 2 , W ( u ) , Z ( u , w ) ] g Z i Z j [ u , Z ( u , w ) ] + o ( ε ) ,
where $\sigma_{ij}[u, \sigma^2, W(u), Z(u,w)]$ represents the $(i,j)$th component of the variance–covariance matrix, $g_u = \partial g/\partial u$, $g_Z = \partial g/\partial Z$, and $g_{Z_i Z_j} = \partial^2 g/(\partial Z_i\, \partial Z_j)$, with $\Delta B_i\, \Delta B_j = \delta_{ij}\, \varepsilon$, $\Delta B_i\, \varepsilon = \varepsilon\, \Delta B_i = 0$, and $\Delta Z_i(u)\, \Delta Z_j(u) = \varepsilon$, where $\delta_{ij}$ is the Kronecker delta function. As $E_u[\Delta B(u)] = 0$ and $E_u[o(\varepsilon)]/\varepsilon \to 0$ for $\varepsilon \downarrow 0$, with the vector of initial conditions $\mathbf{0}_{I \times 1}$, dividing throughout by $\varepsilon$ and taking the conditional expectation we obtain
L u , τ ( Z ) = i = 1 I m = 1 M exp ( ρ i m ) β i W i ( u ) Z i m ( u , w ) + g [ u , Z ( u , w ) ] + g u [ u , Z ( u , w ) ] + g Z [ u , Z ( u , w ) ] μ [ u , W ( u ) , Z ( u , w ) ] + 1 2 i = 1 I j = 1 I σ i j [ u , σ 2 , W ( u ) , Z ( u , w ) ] g Z i Z j [ u , Z ( u , w ) ] + o ( 1 ) .
Suppose there exists a vector ξ I × 1 such that Z ( u , w ) I × 1 = Z ( τ , w ) I × 1 + ξ I × 1 . For a number 0 < η < assume | ξ | η ε [ Z T ( u , w ) ] 1 , which makes ξ a very small number for each ε 0 , and after defining the C 2 function
f [ u , W ( u ) , ξ ] = i = 1 I m = 1 M exp ( ρ i m ) β i W i ( u ) [ Z i m ( τ , w ) + ξ ] + g [ u , Z ( τ , w ) + ξ ] + g u [ u , Z ( τ , w ) + ξ ] + g Z [ u , Z ( τ , w ) + ξ ] μ [ u , W ( u ) , Z ( τ , w ) + ξ ] + 1 2 i = 1 I j = 1 I σ i j [ u , σ 2 , W ( u ) , Z ( τ , w ) + ξ ] × g Z i Z j [ u , Z ( τ , w ) + ξ ] ,
we have
Ψ u τ ( Z ) + ε Ψ u τ ( Z ) u = 1 N u Ψ u τ ( Z ) R I exp ε f [ u , W ( u ) , ξ ] d ξ + 1 N u Ψ u τ ( Z ) Z R I ξ exp ε f [ u , W ( u ) , ξ ] d ξ + o ( ε 1 / 2 ) .
For ε 0 , Δ Z 0 , and
$$
f[u, W(u), \xi] = f[u, W(u), Z(\tau,w)] + \sum_{i=1}^{I} f_{Z_i}[u, W(u), Z(\tau,w)]\, [\xi_i - Z_i(\tau,w)] + \frac{1}{2} \sum_{i=1}^{I} \sum_{j=1}^{I} f_{Z_i Z_j}[u, W(u), Z(\tau,w)]\, [\xi_i - Z_i(\tau,w)]\, [\xi_j - Z_j(\tau,w)] + o(\varepsilon),
$$
assume there exists a symmetric, positive definite and non-singular Hessian matrix Θ I × I and a vector R I × 1 such that
Ψ u τ ( Z ) + ε Ψ u τ ( Z ) u = 1 N u ( 2 π ) I ε | Θ | exp { ε f [ u , W ( u ) , Z ( τ , w ) ] + 1 2 ε R T Θ 1 R } × Ψ u τ ( Z ) + [ Z ( τ , w ) + 1 2 ( Θ 1 R ) ] Ψ u τ ( Z ) Z + o ( ε 1 / 2 ) .
Assuming N u = ( 2 π ) I / ( ε | Θ | ) > 0 , we obtain the Wick-rotated Schrödinger-type equation as
Ψ u τ ( Z ) + ε Ψ u τ ( Z ) u = { 1 ε f [ u , W ( u ) , Z ( τ , w ) ] + 1 2 ε R T Θ 1 R } × Ψ u τ ( Z ) + [ Z ( τ , w ) + 1 2 ( Θ 1 R ) ] Ψ u τ ( Z ) Z + o ( ε 1 / 2 ) .
For any finite positive number η we know Z ( τ , w ) η ε | ξ T | 1 . Then, there exists | Θ 1 R | 2 η ε | 1 ξ T | 1 such that for ε 0 we have | Z ( τ , w ) + 1 2 Θ 1 R | η ε , for | Θ 1 R | 2 η ε | 1 ξ T | 1 , where ξ T is the transposition of ξ and differentiating Equation (A16) with respect to W i yields
$$
\frac{\partial}{\partial W_i} f[u, W(u), Z(\tau,w)]\; \Psi_{u\tau}(Z) = 0.
$$
In Equation (A17), as $\Psi_{u\tau}(Z)$ is a transition function, $\Psi_{u\tau}(Z) \neq 0$. Hence, $\partial f[u, W(u), Z(\tau,w)]/\partial W_i = 0$. We know that $Z(\tau,w) = Z(u,w) - \xi$ and $\xi \to 0$, since we are looking for a stable solution; therefore, in Equation (A17), $Z(\tau,w)$ can be replaced by $Z(u,w)$. Thus,
f [ u , W ( u ) , Z ( u , w ) ] = i = 1 I m = 1 M exp ( ρ i m ) β i W i ( u ) Z i m ( u , w ) + g [ u , Z ( u , w ) ] + g u [ u , Z ( u , w ) ] + g Z [ u , Z ( u , w ) ] μ [ u , W ( u ) , Z ( u , w ) ] + 1 2 i = 1 I j = 1 I σ i j [ u , σ 2 , W ( u ) , Z ( u , w ) ] g Z i Z j [ u , Z ( u , w ) ] .
Equations (A17) and (A18) imply
i = 1 I m = 1 M exp ( ρ i m ) β i Z i m ( u , w ) + g Z [ u , Z ( u , w ) ] μ [ u , W ( u ) , Z ( u , w ) ] W W W i + 1 2 i = 1 I j = 1 I g Z i Z j [ u , Z ( u , w ) ] σ i j [ u , σ 2 , W ( u ) , Z ( u , w ) ] W W W i = 0 .
Assume β i = β j = β for all i j , then
β ( Z ) = i = 1 I m = 1 M exp ( ρ i m ) Z i m ( u , w ) 1 g Z [ u , Z ( u , w ) ] μ [ u , W ( u ) , Z ( u , w ) ] W W W i + 1 2 i = 1 I j = 1 I g Z i Z j [ u , Z ( u , w ) ] σ i j [ u , σ 2 , W ( u ) , Z ( u , w ) ] W W W i .
This completes the proof. □

Appendix A.4. Proof of Proposition 2

Define a gauge γ = [ δ , ω ( δ ) ] for all possible combinations of a δ -gauge in [ U ˜ , U ε ] × R 2 I × U ^ × Ω and an ω ( δ ) -gauge in R 2 I × U ^ × I such that it is a cell in [ U ˜ , U ε ] × R 2 I × U ^ × Ω × R 2 I × U ^ × I , where ω ( δ ) : R 2 I × U ^ × I ( 0 , ) 2 I × U ^ × I is at least a C 1 function. The reason behind considering ω ( δ ) as a function of δ is that, after rain stops, if the uth over proceeds then we can obtain a corresponding sample over u and the batsman has the opportunity to score runs. Let D γ be a stochastic γ -fine in cell E in [ U ˜ , U ε ] × R 2 I × U ^ × Ω × R I . For any ε > 0 and for a δ -gauge in [ U ˜ , U ε ] × R 2 I × U ^ × Ω and an ω ( δ ) -gauge in R 2 I × U ^ × I , choose a γ so that | 1 N u ( D γ ) h H ( R 2 I × U ^ × I ) | < 1 2 | u u | , where u = u + ε ˜ . Assume two disjoint sets E a and E b = [ u , u + ε ˜ ] × R 2 I × U ^ × Ω × { R I E a } such that E a E b = E . As the domain of f ˜ is a 2-sphere, Theorem 3 in [36] implies there is a gauge γ a for the set E a and a gauge γ b for the set E b with γ a γ and γ b γ , so that both gauges conform in their respective sets. For every δ -fine in [ u , u ] × R 2 I × U ^ × Ω and a positive ε ˜ = | u u | , if D γ a is a γ a -fine division of the set E a and D γ b is a γ b -fine division of the set E b , then by the restriction axiom we know that D γ a D γ b is a γ -fine division of E. Furthermore, as E a E b =
1 N u D γ a D γ b h = 1 N u ( D γ a ) h + ( D γ b ) h = α + β .
Let us assume that for every δ -fine we can subdivide the set E b into two disjoint subsets E 1 b and E 2 b with their γ b -fine divisions given by D γ b 1 and D γ b 2 , respectively. Therefore, their Riemann sum can be written as β 1 = 1 N u ( D γ b 1 ) h and β 2 = 1 N u ( D γ b 2 ) h , respectively. Hence, for a small sample over-interval [ u , u ] , | α + β 1 H ( R 2 I × U ^ × I ) | 1 2 | u u | and | α + β 2 H ( R 2 I × U ^ × I ) | 1 2 | u u | . Therefore,
| β 1 β 2 | = α + β 1 H ( R 2 I × U ^ × I ) α + β 2 H ( R 2 I × U ^ × I ) α + β 1 H ( R 2 I × U ^ × I ) + α + β 2 H ( R 2 I × U ^ × I ) | u u | .
Equation (A20) implies that the Cauchy integrability of h is satisfied, and H ( R 2 I × U ^ × I ) = 1 N u R 2 I × U ^ × I h . Now, consider two disjoint sets M 1 and M 2 in R 2 I × U ^ × I such that M = M 1 M 2 with their corresponding integrals H ( M 1 ) , H ( M 2 ) , and H ( M ) . Suppose γ -fine divisions of M 1 and M 2 are given by D γ 1 and D γ 2 , respectively, with their Riemann sums for h being m 1 and m 2 . Equation (A20) implies | m 1 H ( M 1 ) | | u u | and | m 2 H ( M 2 ) | | u u | . Hence, D γ 1 D γ 2 is a γ -fine division of M. Let m = m 1 + m 2 , then Equation (A20) implies | m H ( M ) | | u u | and
| [ H ( M 1 ) + H ( M 2 ) ] H ( M ) | | m H ( M ) | + | m 1 H ( M 1 ) | + | m 2 H ( M 2 ) | 3 | u u | .
Therefore, $H(M) = H(M_1) + H(M_2)$, and the integral is of Stieltjes type. This completes the proof. □

Appendix A.5. Proof of Proposition 3

For a positive Lagrangian multiplier λ u , with initial run condition Z U ˜ , the run dynamics are expressed in Equation (26) such that Definition 7, Proposition 2, and Corollary 1 hold. Subdivide [ U ˜ , U ε ] into n equally distanced small over-intervals [ u , u ] such that ε ˜ 0 , where u = u + ε ˜ . For any positive ε ˜ and normalizing constant N u > 0 , the run transition function is
Ψ U ˜ , U ε ˜ ( Z ) = 1 ( N u ) n R 2 I × U ^ × I × n exp ε ˜ k = 1 n L u , u + ε ˜ k ( Z ) Ψ 0 ( Z ) k = 1 n d Z k ,
with finite Wiener measure N u n k = 1 n d Z k , satisfying Corollary 1 with its initial run transition function after the rain stops as Ψ U ˜ ( Z ) > 0 for all n N . Define Δ W ( ν ) = W ( ν + d ν ) W ( ν ) , then for ε ˜ 0 we have
L u , u ( Z ) = E u U ˜ U ε i = 1 I m = 1 M exp ( ρ i m ) β i W i ( ν ) Z i m ( ν , w ) d ν + λ ν [ Δ W ( ν ) d ν { μ [ ν , W ( ν ) , Z ( ν , w ) ] + exp [ φ ( ν ) 8 / 3 ] } d ν σ ^ [ ν , σ 2 , W ( ν ) , Z ( ν , w ) ] d B ( ν ) ] .
There exists a smooth function g a [ ν , Z ( ν , w ) ] C 2 [ U ˜ , U ε ] × R 2 I × U ^ × R I such that Y ( ν ) = g a [ ν , Z ( ν , w ) ] , with Y ( ν ) being an Itô’s process [37]. Assume
g a ν + Δ ν , Z ( ν , w ) + Δ Z ( ν , w ) = λ ν [ Δ W ( ν ) d ν { μ [ ν , W ( ν ) , Z ( ν , w ) ] + exp [ φ ( ν ) 8 / 3 ] } d ν σ ^ [ ν , σ 2 , W ( ν ) , Z ( ν , w ) ] d B ( ν ) ] .
For a very small sample over-interval around u with ε ˜ 0 , the generalized Itô’s lemma gives
L u , u ( Z ) = i = 1 I m = 1 M exp ( ρ i m ) β i W i ( u ) Z i m ( u , w ) + g a [ u , Z ( u , w ) ] + g u a [ u , Z ( u , w ) ] + g Z a [ u , Z ( u , w ) ] { μ [ u , W ( u ) , Z ( u , w ) ] + exp [ φ ( u ) 8 / 3 ] } + 1 2 i = 1 I j = 1 I σ ^ i j [ u , σ 2 , W ( u ) , Z ( u , w ) ] g Z i Z j a [ u , Z ( u , w ) ] + o ( 1 ) ,
where σ ^ i j u , σ 2 , W ( u ) , Z ( u , w ) represents the { i , j } t h component of the variance–covariance matrix, g u a = g a / u , g Z a = g a / Z , g Z i Z j a = 2 g a / ( Z i Z j ) , Δ B i Δ B j = δ i j ε ˜ , Δ B i ε ˜ = ε ˜ Δ B i = 0 , and Δ Z i ( u ) Δ Z j ( u ) = ε ˜ , where δ i j is the Kronecker delta function. There exists a vector ξ ( 2 I × U ^ × I ) × 1 such that Z ( u , w ) ( 2 I × U ^ × I ) × 1 = Z ( u , w ) ( 2 I × U ^ × I ) × 1 + ξ ( 2 I × U ^ × I ) × 1 . Assuming | ξ | η ε ˜ [ Z T ( u , w ) ] 1 and defining a C 2 function
f a [ u , W ( u ) , ξ ] = i = 1 I m = 1 M exp ( ρ i m ) β i W i ( u ) [ Z i m ( u , w ) + ξ ] + g a [ u , Z ( u , w ) + ξ ] + g u a [ u , Z ( u , w ) + ξ ] + g Z a [ u , Z ( u , w ) + ξ ] { μ [ u , W ( u ) , Z ( u , w ) + ξ ] + exp [ φ ( u ) 8 / 3 ] } + 1 2 i = 1 I j = 1 I σ ^ i j [ u , σ 2 , W ( u ) , Z ( u , w ) + ξ ] × g Z i Z j a [ u , Z ( u , w ) + ξ ] ,
we obtain
Ψ u ( Z ) + ε ˜ Ψ u ( Z ) u = Ψ u ( Z ) N u R 2 I × U ^ × I exp { ε ˜ f a [ u , W ( u ) , ξ ] } d ξ + 1 N u Ψ u ( Z ) Z R 2 I × U ^ × I ξ exp { ε ˜ f a [ u , W ( u ) , ξ ] } d ξ + o ( ε ˜ 1 / 2 ) .
For ε ˜ 0 , Δ Z 0 ,
f a [ u , W ( u ) , ξ ] = f a [ u , W ( u ) , Z ( u , w ) ] + i = 1 I f Z i a [ u , W ( u ) , Z ( u , w ) ] [ ξ i Z i ( u , w ) ] + 1 2 i = 1 I j = 1 I f Z i Z j a [ u , W ( u ) , Z ( u , w ) ] [ ξ i Z i ( u , w ) ] [ ξ j Z j ( u , w ) ] + o ( ε ˜ ) .
There exists a symmetric, positive definite and non-singular Hessian matrix Θ [ 2 I × U ^ × I ] × [ 2 I × U ^ × I ] , and a vector R ( 2 I × U ^ × I ) × 1 such that
Ψ u ( Z ) + ε ˜ Ψ u ( Z ) u = 1 N u ( 2 π ) 2 I × U ^ × I ε ˜ | Θ | exp { ε ˜ f a [ u , W ( u ) , Z ( u , w ) ] + 1 2 ε ˜ R T Θ 1 R } × Ψ u ( Z ) + [ Z ( u , w ) + 1 2 ( Θ 1 R ) ] Ψ u ( Z ) Z + o ( ε ˜ 1 / 2 ) .
Assuming N u = ( 2 π ) 2 I × U ^ × I / ε | Θ | > 0 , we obtain the Wick-rotated Schrödinger-type equation as
Ψ u ( Z ) + ε ˜ Ψ u ( Z ) u = { 1 ε ˜ f a [ u , W ( u ) , Z ( u , w ) ] + 1 2 ε ˜ R T Θ 1 R } × Ψ u ( Z ) + [ Z ( u , w ) + 1 2 ( Θ 1 R ) ] Ψ u ( Z ) Z + o ( ε ˜ 1 / 2 ) .
As Z ( τ , w ) η ε ˜ | ξ T | 1 , there exists | Θ 1 R | 2 η ε ˜ | 1 ξ T | 1 such that for ε ˜ 0 we have | Z ( u , w ) + 1 2 Θ 1 R | η ε ˜ , | Θ 1 R | 2 η ε ˜ | 1 ξ T | 1 and
W i f a [ u , W ( u ) , Z ( u , w ) ] = 0 .
We know that Z ( u , w ) = Z ( u , w ) ξ and for ξ 0 as we are looking for some stable solution. Hence, Z ( u , w ) can be replaced by Z ( u , w ) and
i = 1 I m = 1 M exp ( ρ i m ) β i Z i m ( u , w ) + g Z a [ u , Z ( u , w ) ] { μ [ u , W ( u ) , Z ( u , w ) ] + exp [ φ ( u ) 8 / 3 ] } W W W i + 1 2 i = 1 I j = 1 I σ ^ i j [ u , σ 2 , W ( u ) , Z ( u , w ) ] W W W i g Z i Z j a [ u , Z ( u , w ) ] = 0 .
If β i = β j = β for all i j , then
β ( Z ) = i = 1 I m = 1 M exp ( ρ i m ) Z i m ( u , w ) 1 × g a [ u , Z ( u , w ) ] Z { μ [ u , W ( u ) , Z ( u , w ) ] + exp [ φ ( u ) 8 / 3 ] } W W W i + 1 2 i = 1 I j = 1 I σ ^ i j [ u , σ 2 , W ( u ) , Z ( u , w ) ] W W W i 2 g a [ u , Z ( u , w ) ] Z i Z j .
This completes the proof. □

References

  1. Pramanik, P. Optimization of market stochastic dynamics. Oper. Res. Forum 2020, 1, 31. [Google Scholar] [CrossRef]
  2. Lasry, J.M.; Lions, P.L. Mean field games. Jpn. J. Math. 2007, 2, 229–260. [Google Scholar] [CrossRef]
  3. Duckworth, F.C.; Lewis, A.J. A fair method for resetting the target in interrupted one-day cricket matches. J. Oper. Res. Soc. 1998, 49, 220–227. [Google Scholar] [CrossRef]
  4. Clarke, S.R. Dynamic programming in one-day cricket-optimal scoring rates. J. Oper. Res. Soc. 1988, 39, 331–337. [Google Scholar]
  5. Johnston, M.I.; Clarke, S.R.; Noble, D.H. Assessing player performance in one-day cricket using dynamic programming. Asia Pac. J. Oper. Res. 1993, 10, 45–55. [Google Scholar]
  6. Clarke, S.R.; Norman, J.M. Dynamic programming in cricket: Protecting the weaker batsman. Asia Pac. J. Oper. Res. 1998, 15, 93–108. [Google Scholar]
  7. Preston, I.; Thomas, J. Batting strategy in limited overs cricket. J. R. Stat. Soc. Ser. D Stat. 2000, 1, 95–106. [Google Scholar] [CrossRef]
  8. Scarf, P.; Shi, X. Modelling match outcomes and decision support for setting a final innings target in test cricket. IMA J. Manag. Math. 2005, 16, 161–178. [Google Scholar] [CrossRef]
  9. Norton, P.; Phatarfod, R. Optimal strategies in one-day cricket. Asia Pac. J. Oper. Res. 2008, 25, 495–511. [Google Scholar] [CrossRef]
  10. Kappen, H.J. An introduction to stochastic control theory, path integrals and reinforcement learning. In Proceedings of the AIP Conference Proceedings, Granada, Spain, 11–15 September 2006; American Institute of Physics: New York, NY, USA, 2007; Volume 887, pp. 149–181. [Google Scholar]
  11. Feynman, R.P. Space-time approach to quantum electrodynamics. Phys. Rev. 1949, 76, 769. [Google Scholar] [CrossRef]
  12. de Wit, B.; Smith, J. Field Theory in Particle Physics; Elsevier: Amsterdam, The Netherlands, 2012; Volume 1. [Google Scholar]
  13. Baaquie, B.E. A path integral approach to option pricing with stochastic volatility: Some exact results. J. De Phys. I 1997, 7, 1733–1753. [Google Scholar] [CrossRef]
  14. Baaquie, B.E. Quantum Finance: Path Integrals and Hamiltonians for Options and Interest Rates; Cambridge University Press: Cambridge, UK, 2007. [Google Scholar]
  15. Fujiwara, D. Rigorous Time Slicing Approach to Feynman Path Integrals; Springer: Berlin/Heidelberg, Germany, 2017. [Google Scholar]
  16. Kappen, H.J. Path integrals and symmetry breaking for optimal control theory. J. Stat. Mech. Theory Exp. 2005, 2005, P11011. [Google Scholar] [CrossRef]
  17. Yang, I.; Morzfeld, M.; Tomlin, C.J.; Chorin, A.J. Path integral formulation of stochastic optimal control with generalized costs. IFAC Proc. Vol. 2014, 47, 6994–7000. [Google Scholar] [CrossRef]
  18. Theodorou, E.; Buchli, J.; Schaal, S. Reinforcement learning of motor skills in high dimensions: A path integral approach. In Proceedings of the Robotics and Automation (ICRA), Anchorage, Alaska, 3–8 May 2010; pp. 2397–2403. [Google Scholar]
  19. Pramanik, P. Scoring a goal optimally in a soccer game under Liouville-like quantum gravity action. Oper. Res. Forum 2023, 4, 1–39. [Google Scholar]
  20. Marcet, A.; Marimon, R. Recursive contracts. Econometrica 2019, 87, 1589–1631. [Google Scholar] [CrossRef]
  21. Ljungqvist, L.; Sargent, T.J. Recursive Macroeconomic Theory; MIT Press: Cambridge, MA, USA, 2012. [Google Scholar]
  22. Pramanik, P. Path integral control of a stochastic multi-risk SIR pandemic model. Theory Biosci. 2023, 142, 107–142. [Google Scholar] [CrossRef] [PubMed]
  23. Melville, T. Cricket for Americans: Playing and Understanding the Game; Popular Press of Bowling Green State: Bowling Green, KY, USA, 1993. [Google Scholar]
  24. Øksendal, B. Stochastic differential equations. In Stochastic Differential Equations; Springer: Berlin/Heidelberg, Germany, 2003; pp. 65–84. [Google Scholar]
  25. Govindan, T. Yosida Approximations of Stochastic Differential Equations in Infinite Dimensions and Applications; Springer: Berlin/Heidelberg, Germany, 2016; Volume 79. [Google Scholar]
  26. Falconer, K. Fractal Geometry: Mathematical Foundations and Applications; John Wiley & Sons: Hoboken, NJ, USA, 2004. [Google Scholar]
  27. Schramm, O. Scaling limits of loop-erased random walks and uniform spanning trees. In Selected Works of Oded Schramm; Springer: Berlin/Heidelberg, Germany, 2011; pp. 791–858. [Google Scholar]
  28. Koch, H. Une méthode géométrique élémentaire pour l’étude de certaines questions de la théorie des courbes planes. Acta Math. 1906, 30, 145–174. [Google Scholar] [CrossRef]
  29. Kurtz, D.S.; Swartz, C.W. Theories of Integration: The Integrals of Riemann, Lebesgue, Henstock-Kurzweil, and Mcshane; World Scientific Publishing Company: Singapore, 2004; Volume 9. [Google Scholar]
  30. Duplantier, B.; Sheffield, S. Liouville quantum gravity and KPZ. Invent. Math. 2011, 185, 333–393. [Google Scholar] [CrossRef]
  31. Sheffield, S. Conformal weldings of random surfaces: SLE and the quantum gravity zipper. Ann. Probab. 2016, 44, 3474–3545. [Google Scholar] [CrossRef]
  32. Gwynne, E.; Miller, J. Metric gluing of Brownian and 8/3-Liouville quantum gravity surfaces. arXiv 2016, arXiv:1608.00955. [Google Scholar] [CrossRef]
  33. CRICSHEET. Freely-Available Structured Data for Cricket, Including Ball-by-Ball Data for International and T20 League Cricket Matches, and Identifier (Register) Mapping for People Involved in Cricket. 2023. Available online: https://cricsheet.org (accessed on 16 June 2023).
  34. Pramanik, P. Effects of water currents on fish migration through a Feynman-type path integral approach under 8/3 Liouville-like quantum gravity surfaces. Theory Biosci. 2021, 140, 205–223. [Google Scholar] [CrossRef] [PubMed]
  35. Pramanik, P. Optimization of Dynamic Objective Functions Using Path Integrals. Ph.D. Thesis, Northern Illinois University, DeKalb, IL, USA, 2021. [Google Scholar]
  36. Muldowney, P. A Modern Theory of Random Variation; Wiley Online Library: New York, NY, USA; Hoboken, NJ, USA, 2012. [Google Scholar]
  37. Pramanik, P.; Polansky, A.M. Semicooperation under curved strategy spacetime. J. Math. Sociol. 2023, 48, 1–35. [Google Scholar] [CrossRef]
Figure 1. Runs approximation of one-day matches with μ1 = 0.0009843, μ2 = 0.8563, and σ = 1.288.
Figure 2. The actual run dynamics of the last six one-day internationals played by India.
Figure 3. Simulation of last one-day match with μ1 = 0.000674, μ2 = 0.8567, and σ = 1.22636.
Figure 4. Relationship between coefficient of control and total number of balls delivered.