Article

Optimal Harvesting of Stochastically Fluctuating Populations Driven by a Generalized Logistic SDE Growth Model

1 ISEG-School of Economics and Management, Universidade de Lisboa, 1649-004 Lisbon, Portugal
2 REM—Research in Economics and Mathematics, CEMAPRE, Rua do Quelhas, 6, Gabinete 503, 1200-781 Lisboa, Portugal
Mathematics 2022, 10(17), 3098; https://doi.org/10.3390/math10173098
Submission received: 13 July 2022 / Revised: 19 August 2022 / Accepted: 26 August 2022 / Published: 29 August 2022
(This article belongs to the Special Issue Stochastic Processes and Their Applications)

Abstract
We describe the growth dynamics of a stock using stochastic differential equations with a generalized logistic growth model that encompasses several well-known growth functions as special cases. For each model, we compute the optimal variable effort policy and compare the expected net present value of the total profit earned by the harvester across policies. In addition, we extend the study to the sensitivity of the profit to parameters such as the costs and the volatility, and present in detail the Crank–Nicolson discretization scheme needed to obtain the optimal policies.

1. Introduction

Stochastic differential equations (SDEs) can be applied to model several physical, mechanical, biological, economic, financial, and social phenomena. In [1], readers can find innovative research dedicated to the study of a stock's growth dynamics under a randomly varying environment with the aim of maximizing the profit from harvesting. Such policies are usually designed to maximize the expected net present value of the total profit over a finite time horizon T. While policies over an infinite time horizon have been studied, they will not be discussed here. Because stock size is affected by fishing effort, it is natural to treat the effort as a control and use optimal control techniques to maximize the profit, which is then discounted by a social rate. In [2], the authors provide a comprehensive account of optimal harvesting policies for profit optimization in a deterministic environment. Under general assumptions, except near the end of the finite time horizon T, the optimal policy consists of harvesting with the highest possible effort when the stock size exceeds a previously determined threshold and stopping harvesting when the stock is below that level. After the stock has reached the threshold size, the harvesting rate must be kept constant at an equilibrium value in order for the stock to remain at that size. When the stock size falls below the threshold, harvesting should be halted until the stock size again reaches the threshold, which may take several years.
Stochastic optimal control methods have been applied to design optimal harvesting policies in a randomly varying environment (see, for instance, [3,4,5]). The optimal policy has the same structure as in the deterministic case. However, because of random fluctuations in the environment, the stock size keeps fluctuating, and the harvesting effort must be adjusted at every instant to ensure that the stock size does not exceed the equilibrium value. Adjusting the harvesting effort at every instant is incompatible with the logistics of fishing. Apart from the periods of no harvesting, which have negative social and economic consequences (such as unemployment and company subsidies), these policies require knowledge of the stock size at every instant in order to determine the appropriate level of effort, and current methods of estimating stock size are difficult, costly, time-consuming, and inaccurate.
One possible way to overcome these social and economic problems is to apply an optimal policy with variable effort subject to a penalization structure on the control, that is, to incorporate in the model a term representing a running energy cost based on effort. This extra cost term penalizes the profit whenever the effort changes abruptly from a reference value. In [6,7,8], references therein, and forthcoming papers by the same authors, readers can find several examples of the application of such penalized policies. In [6,9,10,11,12], the authors proposed a constant effort optimal sustainable policy for the Gompertz and logistic growth models (with and without Allee effects) in order to avoid both the social and economic problems and the need to estimate the current stock size at each time instant. That policy, however, implies a profit reduction, which they show to be slight for the models and data considered.
Optimal variable effort policies have been studied considering, among others, a logistic growth model, as in [3,4,5], or the Gompertz model, as in [6,10]. An optimal harvesting policy with a slight generalization of the logistic model, the Pella–Tomlinson model, can be found in [13].
This paper extends all of the above by considering a truly generalized logistic (GL) model for stock growth. From the GL model, we can obtain each of the growth models mentioned above as a special case. We note that each special case retains the sigmoidal and asymptotic properties of the Verhulst logistic curve (as in [14]). We obtain the optimal variable effort policy and study its sensitivity to the parameters. The rest of this paper is structured as follows: in Section 2, we present the model formulation and the Crank–Nicolson discretization scheme, then use a dynamic programming method to solve the variable effort optimization problem. Numerical experiments involving several stock growth models and parameter sensitivities are discussed in Section 3. Finally, in Section 4 we provide closing remarks.

2. Optimal Policy with Variable Effort

2.1. Generalized Logistic Growth Model

The logistic growth model was introduced by Verhulst [15] in 1838. Since then, according to [14], several authors have derived other growth models based on the logistic model. The generalized logistic (GL) growth model considered here can be found in [14] and is given by the ordinary differential equation (ODE):
$$dX(t) = r X^a(t)\left[1-\left(\frac{X(t)}{K}\right)^b\right]^c dt,\qquad(1)$$
with positive parameters $a$, $b$, and $c$. While negative parameter values can be considered, the risk of losing biological meaning is high. The GL model retains the logistic model's most important properties, namely, the existence of a horizontal asymptote at $X = K$, the possibility of reaching a maximum value, and the presence of an inflection point. Several other properties and features can be found in [14]. In Equation (1), $X(t)$ represents the stock size at time instant $t$, $r$ is the population's intrinsic growth rate, and $K$ stands for the carrying capacity (sometimes known as the saturation level).
In the presence of harvesting and under a stochastic environment, the stock dynamic can be modeled by the SDE:
$$dX(t) = r X^a(t)\left[1-\left(\frac{X(t)}{K}\right)^b\right]^c dt - qE(t)X(t)\,dt + \sigma X(t)\,dW(t),\quad X(0)=x,\qquad(2)$$
where $X(t)$, $r$, and $K$ are defined above, $q>0$ is the catchability coefficient, $E(t)\ge 0$ is the harvesting effort (a Markov control), $\sigma>0$ measures the strength of environmental fluctuations, $W(t)$ is a standard Wiener process (see, for instance, [16]), and $x>0$ represents the stock size at the initial time 0. Equation (2) is an autonomous Itô SDE and its solution, $X(t)$, is a homogeneous diffusion process with drift coefficient $rX^a\left[1-\left(\frac{X}{K}\right)^b\right]^c - qEX$ and diffusion coefficient $\sigma^2X^2$. In [16,17], after a few adaptations, it is possible to find conditions that guarantee a unique solution to Equation (2), provided that the drift and $\sigma X(t)$ satisfy certain regularity and growth conditions. For that, it is sufficient to have $0 \le E < \frac{r}{q}\left(1-\frac{\sigma^2}{2r}\right)$, where $E(t)\equiv E$ during a small time interval $[t, t+\Delta t]$ (as in Appendix A).
Two quantities of special interest in harvesting are the yield per unit time, $Y(t) = qE(t)X(t)$, and the fishing mortality rate, $F(t) = qE(t)$.
For a triplet $(a,b,c)$, we denote Equation (2) by $GL(a,b,c)$, i.e., the generalized logistic growth model with parameters $a$, $b$, and $c$. For instance, $GL(1,1,1)$ corresponds to the well-known logistic model. Figure 1 shows several curves of the stock growth rate $dX(t)/dt$ as a function of the stock size $X(t)$ for several particular cases of the GL model (the deterministic case without harvesting). From [14], and after a few simplifications, we can recognize the following well-known models: the $GL(1,1,1)$ logistic model, the $GL(1,1,2)$ Blumberg model (particular case), the $GL(1,2,1)$ and $GL(1,2,2)$ Richards models, and $GL(1,2,3)$, a particular case of a GL model. Other forms of $GL(a,b,c)$ correspond to models not considered here, such as the generalized von Bertalanffy model ($GL(a,\,1-a,\,1)$), the generalized Gompertz model ($GL(1,\,b\to 0,\,c)$), and the hyperbolic model ($GL(1-1/n,\,1,\,1+1/n)$).
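To make the comparison between the special cases concrete, the deterministic GL growth rate can be evaluated numerically. The following Python sketch uses illustrative values $r=1$ and $K=100$ (not the paper's estimated halibut parameters) and computes $dX(t)/dt$ for any triplet $(a,b,c)$:

```python
def gl_growth_rate(x, a, b, c, r=1.0, K=100.0):
    """Deterministic GL(a, b, c) growth rate dX/dt = r x^a (1 - (x/K)^b)^c,
    without harvesting. The values of r and K are illustrative only."""
    return r * x**a * (1.0 - (x / K)**b)**c
```

For example, at half the carrying capacity the Richards model $GL(1,2,1)$ grows faster than the logistic $GL(1,1,1)$, consistent with the ordering of the curves in Figure 1.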

2.2. Optimal Policy

A stochastic optimal control problem (SOCP) here consists of determining an optimal policy with variable effort based on profit optimization. The net profit earned by the harvester per unit time, $\Pi(t)$, can be defined as the difference between the sales revenue per unit time, $R(t) = (p_1 - p_2H(t))H(t)$ ($p_1>0$, $p_2\ge 0$), where $H(t)=qE(t)X(t)$ is the yield per unit time, and the harvesting costs per unit time, $C(t) = (c_1+c_2E(t))E(t)$ ($c_1>0$, $c_2>0$), i.e.,
$$\Pi(t) = R(t) - C(t) = \left(p_1 q X(t) - c_1\right)E(t) - \left(p_2 q^2 X^2(t) + c_2\right)E^2(t).$$
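In code, the profit rate can be computed either as revenue minus cost or from the expanded form above; both agree. A small sketch (all parameter values below are hypothetical, chosen only to exercise the formula):

```python
def profit_rate(x, e, p1, p2, c1, c2, q):
    """Net profit per unit time, Pi(t) = R(t) - C(t)."""
    h = q * e * x                 # yield per unit time, H(t) = q E(t) X(t)
    revenue = (p1 - p2 * h) * h   # R(t) = (p1 - p2 H(t)) H(t)
    cost = (c1 + c2 * e) * e      # C(t) = (c1 + c2 E(t)) E(t)
    return revenue - cost
```

Expanding `revenue - cost` reproduces the quadratic-in-effort form $(p_1 q X - c_1)E - (p_2 q^2 X^2 + c_2)E^2$ used throughout the paper.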
To derive a well-posed SOCP in finite time, we assume that harvesting occurs within the time interval $[0,T]$. Future harvester profits are discounted at a rate $\delta>0$, for example, to account for currency depreciation and opportunity costs. The optimal harvesting policy maximizes the expected net present value of the total profit. The maximized value of the expected total discounted profit within the interval $[t,T]$, denoted by $J^*(X(t),t)$, is
$$J^*(X(t),t) = \max_{E(\tau),\; t\le\tau\le T} \mathbb{E}_{t,x}\left[\int_t^T e^{-\delta(\tau-t)}\,\Pi(\tau)\,d\tau\right],\qquad(3)$$
for which we use the short notation $\mathbb{E}[\,\cdot \mid X(t)=y\,] = \mathbb{E}_{t,y}[\,\cdot\,]$. Finally, the SOCP consists in obtaining the maximized value for the interval $[0,T]$:
$$J^* \equiv J^*(x,0) = \max_{E(\tau),\; 0\le\tau\le T} \mathbb{E}_{0,x}\left[\int_0^T e^{-\delta\tau}\,\Pi(\tau)\,d\tau\right],$$
considering $E(t)$ as a control subject to the population dynamics given in Equation (2), the control restrictions $0\le E_{min}\le E(t)\le E_{max}<\infty$, and the terminal condition $J^*(X(T),T)=0$.
The SOCP is solved using stochastic dynamic programming (see pp. 259–268 in [18], along with [19,20]), as detailed in Appendix A, yielding the maximized Hamilton–Jacobi–Bellman (HJB) equation:
$$-\frac{\partial J^*(X(t),t)}{\partial t} = \left[p_1 q X(t) - c_1 - \left(p_2 q^2 X^2(t)+c_2\right)E^*(t)\right]E^*(t) - \delta J^*(X(t),t) + \frac{\partial J^*(X(t),t)}{\partial X(t)}\left(r X^a(t)\left[1-\left(\frac{X(t)}{K}\right)^b\right]^c - qE^*(t)X(t)\right) + \frac{1}{2}\,\frac{\partial^2 J^*(X(t),t)}{\partial X^2(t)}\,\sigma^2 X^2(t),\qquad(4)$$
where $J^*(X(t),t)$ represents the expected total discounted profit. The optimal variable effort is
$$E^*(t) = \begin{cases} E_{min}, & \text{if } E_u^*(t) < E_{min},\\ E_u^*(t), & \text{if } E_{min}\le E_u^*(t)\le E_{max},\\ E_{max}, & \text{if } E_u^*(t) > E_{max}, \end{cases}$$
where
$$E_u^*(t) = \frac{\left(p_1 - \dfrac{\partial J^*(X(t),t)}{\partial X(t)}\right)qX(t) - c_1}{2\left(p_2 q^2 X^2(t) + c_2\right)}\qquad(5)$$
is the unconstrained effort (as in [21]), which can also be referred to as the optimal variable effort.
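The feedback rule above translates directly into code: compute the unconstrained effort from Equation (5) and clip it to $[E_{min}, E_{max}]$. A minimal sketch, where the derivative $\partial J^*/\partial X$ is passed in as a number (in practice it would come from the finite-difference grid of Section 2.3):

```python
def optimal_effort(x, dJdx, p1, p2, c1, c2, q, e_min, e_max):
    """Unconstrained effort from Equation (5), clipped to [e_min, e_max]."""
    e_u = ((p1 - dJdx) * q * x - c1) / (2.0 * (p2 * q**2 * x**2 + c2))
    return min(max(e_u, e_min), e_max)
```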

2.3. Domain Discretization and Finite Difference Approximation

The HJB Equation (4) is a nonlinear parabolic PDE involving one temporal and one spatial variable. Since it cannot be solved analytically, we resort to numerical methods and apply a Crank–Nicolson discretization scheme based on finite differences. The natural domain of this problem is $[0,T]\times\mathbb{R}$; however, to obtain a numerical solution it is necessary to define a bounded computational domain. As a result, we define the bounded domain $[0,T]\times[0,x_{max}]$, with $x_{max}$ such that the probability of $X(t)$ exceeding $x_{max}$ is negligible. Moreover, we consider uniform grids in the space and time domains.
The partitions for space and time are
$$x_i = x_0 + i\,\Delta x,\quad i=1,\dots,m,\quad \Delta x = x_{max}/m,$$
$$t_j = t_0 + j\,\Delta t,\quad j=1,\dots,n,\quad \Delta t = T/n.$$
They form a grid of points where $J^*_{i,j} := J^*(x_i,t_j)$ and $E^*_{i,j} := E^*(x_i,t_j)$, with $0\le i\le m$ and $0\le j\le n$. Because the boundary condition $J^*(X(T),T)=0$ is terminal rather than initial, the computation moves backwards in time from $T$ to 0.
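The grid and the terminal condition can be set up as follows (the grid sizes $m$, $n$ and the bound $x_{max}$ are placeholders; the paper chooses $m$ and $n$ by trial until the scheme converges):

```python
import numpy as np

T, x_max = 25.0, 100.0          # time horizon and a hypothetical space bound
m, n = 200, 1000                # hypothetical grid sizes
dx, dt = x_max / m, T / n
x = np.linspace(0.0, x_max, m + 1)   # x_i = i * dx, i = 0, ..., m
t = np.linspace(0.0, T, n + 1)       # t_j = j * dt, j = 0, ..., n
J_terminal = np.zeros(m + 1)         # terminal condition J*(x_i, T) = 0
```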
As in [22], the space derivatives are approximated by central differences (the average of the regressive and progressive approximations), further averaged over the time levels $j$ and $j+1$ as required by the Crank–Nicolson scheme (for $0\le j\le n-1$):
$$\frac{\partial J^*_{i,j}}{\partial x} \approx \frac{1}{2}\left(\frac{J^*_{i+1,j+1}-J^*_{i-1,j+1}}{2\Delta x} + \frac{J^*_{i+1,j}-J^*_{i-1,j}}{2\Delta x}\right),\quad 1\le i\le m-1,$$
$$\frac{\partial^2 J^*_{i,j}}{\partial x^2} \approx \frac{1}{2}\left(\frac{J^*_{i+1,j+1}-2J^*_{i,j+1}+J^*_{i-1,j+1}}{\Delta x^2} + \frac{J^*_{i+1,j}-2J^*_{i,j}+J^*_{i-1,j}}{\Delta x^2}\right),\quad 1\le i\le m-1,$$
$$\frac{\partial J^*_{m,j}}{\partial x} \approx \frac{1}{2}\left(\frac{3J^*_{m,j+1}-4J^*_{m-1,j+1}+J^*_{m-2,j+1}}{2\Delta x} + \frac{3J^*_{m,j}-4J^*_{m-1,j}+J^*_{m-2,j}}{2\Delta x}\right),\quad i=m,$$
and
$$\frac{\partial^2 J^*_{m,j}}{\partial x^2} \approx \frac{1}{2}\,\frac{3J^*_{m,j+1}-7J^*_{m-1,j+1}+5J^*_{m-2,j+1}-J^*_{m-3,j+1}}{\Delta x^2} + \frac{1}{2}\,\frac{3J^*_{m,j}-7J^*_{m-1,j}+5J^*_{m-2,j}-J^*_{m-3,j}}{\Delta x^2},\quad i=m.$$
The time derivative is approximated using a two-time-level scheme:
$$\frac{\partial J^*_{i,j}}{\partial t} \approx \frac{J^*_{i,j+1}-J^*_{i,j}}{\Delta t},\quad 0\le i\le m,\ 0\le j\le n-1.$$

2.4. Numerical Solution

Using the above approximations, the discretized version of the HJB Equation (4) is (for $0\le j\le n-1$)
$$-\frac{J^*_{i,j+1}-J^*_{i,j}}{\Delta t} = \left(p_1 q x_i - c_1\right)E^*_{i,j+1} - \left(p_2 q^2 x_i^2 + c_2\right)\left(E^*_{i,j+1}\right)^2 - \delta\left(\frac{J^*_{i,j+1}}{2}+\frac{J^*_{i,j}}{2}\right) + \frac{1}{2}\left(\frac{J^*_{i+1,j+1}-J^*_{i-1,j+1}}{2\Delta x}+\frac{J^*_{i+1,j}-J^*_{i-1,j}}{2\Delta x}\right)\left(f(x_i) - qE^*_{i,j+1}\right)x_i + \frac{1}{4}\left(\frac{J^*_{i+1,j+1}-2J^*_{i,j+1}+J^*_{i-1,j+1}}{\Delta x^2}+\frac{J^*_{i+1,j}-2J^*_{i,j}+J^*_{i-1,j}}{\Delta x^2}\right)\sigma^2 x_i^2,\quad \text{for } 1\le i\le m-1,$$
and
$$-\frac{J^*_{m,j+1}-J^*_{m,j}}{\Delta t} = \left(p_1 q x_m - c_1\right)E^*_{m,j+1} - \left(p_2 q^2 x_m^2 + c_2\right)\left(E^*_{m,j+1}\right)^2 - \delta\left(\frac{J^*_{m,j+1}}{2}+\frac{J^*_{m,j}}{2}\right) + \frac{1}{2}\left(\frac{3J^*_{m,j+1}-4J^*_{m-1,j+1}+J^*_{m-2,j+1}}{2\Delta x}+\frac{3J^*_{m,j}-4J^*_{m-1,j}+J^*_{m-2,j}}{2\Delta x}\right)\left(f(x_m) - qE^*_{m,j+1}\right)x_m + \frac{1}{4}\left(\frac{3J^*_{m,j+1}-7J^*_{m-1,j+1}+5J^*_{m-2,j+1}-J^*_{m-3,j+1}}{\Delta x^2}+\frac{3J^*_{m,j}-7J^*_{m-1,j}+5J^*_{m-2,j}-J^*_{m-3,j}}{\Delta x^2}\right)\sigma^2 x_m^2,\quad i=m,$$
with $f(x_i) = r\,x_i^{\,a-1}\left[1-\left(\frac{x_i}{K}\right)^b\right]^c$, $i=1,\dots,m$, in both expressions.
The discretized version of the unconstrained optimal effort (5) is
$$E^*_{u,i,j} = \frac{\left(p_1 - \dfrac{\partial J^*_{i,j}}{\partial x}\right)qx_i - c_1}{2\left(p_2 q^2 x_i^2 + c_2\right)} = \frac{\left[p_1 - \dfrac{1}{2}\left(\dfrac{J^*_{i+1,j+1}-J^*_{i-1,j+1}}{2\Delta x}+\dfrac{J^*_{i+1,j}-J^*_{i-1,j}}{2\Delta x}\right)\right]qx_i - c_1}{2\left(p_2 q^2 x_i^2 + c_2\right)},$$
for 1 i m 1 , and
$$E^*_{u,m,j} = \frac{\left(p_1 - \dfrac{\partial J^*_{m,j}}{\partial x}\right)qx_m - c_1}{2\left(p_2 q^2 x_m^2 + c_2\right)} = \frac{\left[p_1 - \dfrac{1}{2}\left(\dfrac{3J^*_{m,j+1}-4J^*_{m-1,j+1}+J^*_{m-2,j+1}}{2\Delta x}+\dfrac{3J^*_{m,j}-4J^*_{m-1,j}+J^*_{m-2,j}}{2\Delta x}\right)\right]qx_m - c_1}{2\left(p_2 q^2 x_m^2 + c_2\right)},$$
for i = m .
The discretized version of the HJB equation can be written as a system of $m$ equations (for $0\le j\le n-1$):
$$\left(\frac{\left(f(x_i)-qE^*_{i,j+1}\right)x_i\,\Delta t}{4\Delta x} - \frac{\sigma^2 x_i^2\,\Delta t}{4\Delta x^2}\right)J^*_{i-1,j} + \left(1+\frac{\delta\Delta t}{2}+\frac{\sigma^2 x_i^2\,\Delta t}{2\Delta x^2}\right)J^*_{i,j} - \left(\frac{\left(f(x_i)-qE^*_{i,j+1}\right)x_i\,\Delta t}{4\Delta x} + \frac{\sigma^2 x_i^2\,\Delta t}{4\Delta x^2}\right)J^*_{i+1,j}$$
$$= \left(-\frac{\left(f(x_i)-qE^*_{i,j+1}\right)x_i\,\Delta t}{4\Delta x} + \frac{\sigma^2 x_i^2\,\Delta t}{4\Delta x^2}\right)J^*_{i-1,j+1} + \left(1-\frac{\delta\Delta t}{2}-\frac{\sigma^2 x_i^2\,\Delta t}{2\Delta x^2}\right)J^*_{i,j+1} + \left(\frac{\left(f(x_i)-qE^*_{i,j+1}\right)x_i\,\Delta t}{4\Delta x} + \frac{\sigma^2 x_i^2\,\Delta t}{4\Delta x^2}\right)J^*_{i+1,j+1} + \left(p_1 q x_i - c_1\right)E^*_{i,j+1}\Delta t - \left(p_2 q^2 x_i^2 + c_2\right)\left(E^*_{i,j+1}\right)^2\Delta t,\quad 1\le i\le m-1,$$
and
$$\frac{\sigma^2 x_m^2\,\Delta t}{4\Delta x^2}\,J^*_{m-3,j} - \left(\frac{\left(f(x_m)-qE^*_{m,j+1}\right)x_m\,\Delta t}{4\Delta x} + \frac{5\sigma^2 x_m^2\,\Delta t}{4\Delta x^2}\right)J^*_{m-2,j} + \left(\frac{\left(f(x_m)-qE^*_{m,j+1}\right)x_m\,\Delta t}{\Delta x} + \frac{7\sigma^2 x_m^2\,\Delta t}{4\Delta x^2}\right)J^*_{m-1,j} + \left(1+\frac{\delta\Delta t}{2} - \frac{3\left(f(x_m)-qE^*_{m,j+1}\right)x_m\,\Delta t}{4\Delta x} - \frac{3\sigma^2 x_m^2\,\Delta t}{4\Delta x^2}\right)J^*_{m,j}$$
$$= -\frac{\sigma^2 x_m^2\,\Delta t}{4\Delta x^2}\,J^*_{m-3,j+1} + \left(\frac{\left(f(x_m)-qE^*_{m,j+1}\right)x_m\,\Delta t}{4\Delta x} + \frac{5\sigma^2 x_m^2\,\Delta t}{4\Delta x^2}\right)J^*_{m-2,j+1} - \left(\frac{\left(f(x_m)-qE^*_{m,j+1}\right)x_m\,\Delta t}{\Delta x} + \frac{7\sigma^2 x_m^2\,\Delta t}{4\Delta x^2}\right)J^*_{m-1,j+1} + \left(1-\frac{\delta\Delta t}{2} + \frac{3\left(f(x_m)-qE^*_{m,j+1}\right)x_m\,\Delta t}{4\Delta x} + \frac{3\sigma^2 x_m^2\,\Delta t}{4\Delta x^2}\right)J^*_{m,j+1} + \left(p_1 q x_m - c_1\right)E^*_{m,j+1}\Delta t - \left(p_2 q^2 x_m^2 + c_2\right)\left(E^*_{m,j+1}\right)^2\Delta t,\quad i=m.$$
The system can be written, using appropriate matrices $A$, $B$, and $C$, in the form
$$A\,J^* = B\,J_+^* + C,$$
with
$$J^* = \left[J_0^* \mid J_1^* \mid \cdots \mid J_{n-1}^*\right],\qquad J_+^* = \left[J_1^* \mid J_2^* \mid \cdots \mid J_n^*\right],\quad\text{and}$$
$$J_j^* = \left[J_{0,j}^*\;\; J_{1,j}^*\;\;\cdots\;\; J_{m,j}^*\right]^T,\quad 0\le j\le n,$$
where $T$ denotes transposition. Solving the system, we obtain the optimal solution at the grid points. When the stock is at a given value $X$ at time $t_k$, the optimal solution is obtained by polynomial interpolation between the values at the points of the partition $x_0, x_1, \dots, x_m$ neighbouring $X$.
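One backward time step of the scheme can be sketched as follows. This is a simplified illustration of solving $AJ^* = BJ_+^* + C$, not the paper's exact implementation: it uses only the interior stencil and holds the two boundary nodes fixed, whereas the paper applies one-sided differences at $i=m$; all numerical values are placeholders.

```python
import numpy as np

def crank_nicolson_step(J_next, E, x, dt, dx, f, q, sigma, delta, p1, p2, c1, c2):
    """One backward step of the Crank-Nicolson scheme: solve A J_j = B J_{j+1} + C.

    Simplified sketch with the interior stencil only; the boundary nodes
    i = 0 and i = m are kept fixed instead of using one-sided differences."""
    m = len(x) - 1
    g = (f(x) - q * E) * x            # drift factor (f(x_i) - q E) x_i
    s = sigma**2 * x**2               # squared diffusion coefficient
    adv = g * dt / (4.0 * dx)         # advection weight
    dif = s * dt / (4.0 * dx**2)      # diffusion weight

    A = np.zeros((m + 1, m + 1))
    B = np.zeros((m + 1, m + 1))
    C = np.zeros(m + 1)
    for i in range(1, m):
        A[i, i - 1] = adv[i] - dif[i]
        A[i, i] = 1.0 + delta * dt / 2.0 + 2.0 * dif[i]
        A[i, i + 1] = -adv[i] - dif[i]
        B[i, i - 1] = -adv[i] + dif[i]
        B[i, i] = 1.0 - delta * dt / 2.0 - 2.0 * dif[i]
        B[i, i + 1] = adv[i] + dif[i]
        C[i] = ((p1 * q * x[i] - c1) * E[i]
                - (p2 * q**2 * x[i]**2 + c2) * E[i]**2) * dt
    A[0, 0] = A[m, m] = 1.0           # keep boundary values fixed
    B[0, 0] = B[m, m] = 1.0
    return np.linalg.solve(A, B @ J_next + C)
```

In practice, the tridiagonal structure of $A$ would be exploited with a banded solver, and $E^*_{i,j+1}$ would come from the discretized effort formulas above.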

3. Results

To compute $J^*$, that is, the profit earned by the harvester during the interval $[0,T]$, we performed 1000 Monte Carlo simulations of the stock, the effort, and the profit. The stock dynamics were simulated using an Euler scheme. In [2,21], we found a fairly complete set of parameter values (namely, $r$, $K$, $q$, $p_1$, $p_2$, $c_1$, and $c_2$) for the Pacific halibut (Hippoglossus stenolepis). The other parameters ($E_{min}$, $E_{max}$, $\sigma$, $x$, $\delta$, $n$, $m$, and $x_{max}$), for which we had no data, were chosen empirically to be reasonable, and the time horizon was set at $T=25$ years. Note that $n$ and $m$ were chosen after several trials in order for the algorithm to reach convergence. We were unable to estimate any parameters due to the lack of historical data, namely, stock sizes and harvesting efforts. Table 1 contains the complete set of parameter values.
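As an illustration of the simulation step, a minimal Euler(–Maruyama) scheme for the GL SDE under a constant effort is sketched below. In the actual experiments, the effort at each step is the optimal feedback effort interpolated from the grid, and the parameter values here are hypothetical:

```python
import numpy as np

rng = np.random.default_rng(0)

def simulate_stock(x0, effort, a, b, c, r, K, q, sigma, T, n_steps):
    """One Euler path of dX = [r X^a (1 - (X/K)^b)^c - q E X] dt + sigma X dW."""
    dt = T / n_steps
    x = np.empty(n_steps + 1)
    x[0] = x0
    for j in range(n_steps):
        drift = r * x[j]**a * (1.0 - (x[j] / K)**b)**c - q * effort * x[j]
        noise = sigma * x[j] * np.sqrt(dt) * rng.standard_normal()
        x[j + 1] = max(x[j] + drift * dt + noise, 0.0)  # keep the path non-negative
    return x
```

Averaging the profit along many such paths approximates the expected value in Expression (3).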
The models chosen to obtain the optimal policies were those referenced as particular cases of the GL growth model in Section 2.1. For each one, we ran 1000 simulations to obtain the optimal effort and the optimal profit. The results are listed in Table 2.
It can be observed from Table 2 that the highest profit, USD 565.456 M, comes from the application of model $GL(1,2,1)$, followed by models $GL(1,2,2)$, $GL(1,1,1)$, $GL(1,2,3)$, and $GL(1,1,2)$, in decreasing order. This is not a surprise, as $GL(1,2,1)$ is, according to Figure 1, the model with the highest growth rate. Model $GL(1,1,2)$ yields the lowest profit, USD 214.864 M, and is the model with the lowest growth rate, as seen in Figure 1. In terms of standard deviation, the highest value corresponds to model $GL(1,2,1)$ and the lowest to model $GL(1,2,3)$; the remaining models have identical values.
For the models presented in Table 2, Figure 2, Figure 3 and Figure 4 depict what could happen when applying the optimal harvesting policy to such growth models. In each panel, the thin lines represent one possible reality that a harvester would experience (one of the 1000 simulations), while the thicker lines represent the mean of the 1000 simulations, which is a good approximation of the expected value appearing in Expression (3). For each model, a sample trajectory and the sample mean are shown for the stock size $X(t)$, the yield $Y(t)=qE(t)X(t)$, the fishing mortality $F(t)=qE(t)$, and the optimal profit per unit time. The conclusions based on Figure 2, Figure 3 and Figure 4 are similar to the results presented in Table 2. Indeed, model $GL(1,2,1)$ shows the largest stock size compared with the other models, implying higher fishing mortality.

Sensitivity Analysis

We now examine the implications of changing the values of $\delta$, $c_1$, $c_2$, and $\sigma$ from the baseline values in Table 1 used for the simulations. To perform the sensitivity analysis, we chose the $GL(1,2,3)$ model and ran simulations identical to the previous ones while changing one parameter at a time. A summary of the alternative parameter values and the obtained profit is provided in Table 3.
Increasing the value of $\delta$ has a large effect on the profit, implying a decrease of about 50%. This is to be expected, as an increase in $\delta$ represents a depreciation in the value of the currency; a greater depreciation ($\delta = 0.15$) implies a greater loss of profit. The baseline value of $c_1$ is very small, and thus the increase to $96\times10^{-4}$ is negligible. However, increasing that value to $96\times10^{-2}$ reduces the profit, as expected. An increase in $c_2$ also reduces the profit, possibly by as much as 50%. We note that $c_2$ represents unexpected costs (unspecialized gear and/or vessels) typically associated with extra costs. The influence of stochastic fluctuations is analysed by changing the values of $\sigma$. A rise in the stochastic fluctuations, i.e., $\sigma = 0.30$ or $\sigma = 0.45$, has a huge influence on the profit, as reflected in the standard deviation values.

4. Conclusions

In this paper, we have worked with a generalized logistic stock growth model in the form of a stochastic differential equation. From this model, we showed how to obtain particular models typically used in harvesting problems. We formulated a stochastic optimal control problem to obtain the optimal variable effort policy, deduced the HJB equation using dynamic programming techniques, and solved it numerically by applying a Crank–Nicolson discretization scheme. We applied the optimal policy to several particular models, considering realistic parameter values estimated from a real stock. We conclude that models with a higher growth rate imply higher profit values, as expected. In addition, we examined the sensitivity of the expected total discounted profit to the discount factor, the cost parameters, and the stochastic environmental fluctuations.

Funding

The author was partially supported by the projects CEMAPRE/REM-UIDB/05069/2020 and EXPL/EGE-IND/0351/2021, both financed by FCT/MCTES through national funds.

Data Availability Statement

Not applicable.

Conflicts of Interest

The author declares no conflict of interest.

Appendix A

To obtain the Hamilton–Jacobi–Bellman Equation (4), appropriate assumptions must be made and the following approximations used for small values of $\Delta t>0$, which lead to error terms of $o(\Delta t)$ as $\Delta t\to 0$; care must be taken to use Itô calculus, i.e., a second-order Taylor expansion in $x$:
(A) $\Delta t$ is a small positive quantity;
(B) $e^{-\delta\Delta t} \approx 1 - \delta\Delta t$;
(C) $X(t+\Delta t) \approx X(t) + \Delta X(t)$;
(D) $\Delta X(t) \approx f(X(t))X(t)\Delta t - qE(t)X(t)\Delta t + \sigma X(t)\Delta W(t)$;
(E) $J^*(X(t+\Delta t), t+\Delta t)$ is known;
(F) during the time interval $[t, t+\Delta t]$, the control $E(t)$ is constant.
Now, we consider the current value form of Equation (3):
$$J^*(X(t),t) = \max_{E(\tau),\; t\le\tau\le T}\mathbb{E}_{t,X(t)}\left[\int_t^T e^{-\delta(\tau-t)}\left[p_1 q X(\tau)-c_1-\left(p_2 q^2 X^2(\tau)+c_2\right)E(\tau)\right]E(\tau)\,d\tau\right],$$
and split the integral into two parts as follows, with $t+\Delta t < T$:
$$J^*(X(t),t) = \max_{E(\tau),\; t\le\tau\le T}\mathbb{E}_{t,X(t)}\Bigg[\int_t^{t+\Delta t} e^{-\delta(\tau-t)}\left[p_1 q X(\tau)-c_1-\left(p_2 q^2 X^2(\tau)+c_2\right)E(\tau)\right]E(\tau)\,d\tau + e^{-\delta\Delta t}\int_{t+\Delta t}^{T} e^{-\delta(\tau-(t+\Delta t))}\left[p_1 q X(\tau)-c_1-\left(p_2 q^2 X^2(\tau)+c_2\right)E(\tau)\right]E(\tau)\,d\tau\Bigg].$$
Using assumption (B), we can write
$$J^*(X(t),t) = \max_{E(\tau),\; t\le\tau\le T}\mathbb{E}_{t,X(t)}\Bigg[\int_t^{t+\Delta t} e^{-\delta(\tau-t)}\left[p_1 q X(\tau)-c_1-\left(p_2 q^2 X^2(\tau)+c_2\right)E(\tau)\right]E(\tau)\,d\tau + (1-\delta\Delta t)\int_{t+\Delta t}^{T} e^{-\delta(\tau-(t+\Delta t))}\left[p_1 q X(\tau)-c_1-\left(p_2 q^2 X^2(\tau)+c_2\right)E(\tau)\right]E(\tau)\,d\tau\Bigg] + o(\Delta t).$$
Applying Bellman’s principle of optimality (see [18,19,20]), we have
$$J^*(X(t),t) = \max_{E(\tau),\; t\le\tau\le t+\Delta t}\mathbb{E}_{t,X(t)}\Bigg[\int_t^{t+\Delta t} e^{-\delta(\tau-t)}\left[p_1 q X(\tau)-c_1-\left(p_2 q^2 X^2(\tau)+c_2\right)E(\tau)\right]E(\tau)\,d\tau + (1-\delta\Delta t)\max_{E(\tau),\; t+\Delta t\le\tau\le T}\int_{t+\Delta t}^{T} e^{-\delta(\tau-(t+\Delta t))}\left(p_1 q X(\tau)-c_1-\left(p_2 q^2 X^2(\tau)+c_2\right)E(\tau)\right)E(\tau)\,d\tau\Bigg] + o(\Delta t).$$
Then, using assumptions (C) and (F),
$$J^*(X(t),t) = \max_{E(\tau),\; t\le\tau\le T}\mathbb{E}_{t,X(t)}\Big[\left[p_1 q X(t)-c_1-\left(p_2 q^2 X^2(t)+c_2\right)E(t)\right]E(t)\,\Delta t + (1-\delta\Delta t)\,J^*\big(X(t)+\Delta X(t),\, t+\Delta t\big)\Big] + o(\Delta t).\qquad(A1)$$
The Taylor expansion of $J^*(X(t)+\Delta X(t),\, t+\Delta t)$ about $(X(t),t)$ provides
$$J^*\big(X(t)+\Delta X(t),\, t+\Delta t\big) = J^*(X(t),t) + \frac{\partial J^*(X(t),t)}{\partial t}\Delta t + \frac{\partial J^*(X(t),t)}{\partial X(t)}\Delta X(t) + \frac{1}{2}\frac{\partial^2 J^*(X(t),t)}{\partial X^2(t)}\big(\Delta X(t)\big)^2 + o(\Delta t).\qquad(A2)$$
Writing $f(X(t)) = rX^{a-1}(t)\left[1-\left(\frac{X(t)}{K}\right)^b\right]^c$ and replacing $\Delta X(t)$ in (A2) with the approximation in (D), we have
$$J^*\big(X(t)+\Delta X(t),\, t+\Delta t\big) = J^*(X(t),t) + \frac{\partial J^*(X(t),t)}{\partial t}\Delta t + \frac{\partial J^*(X(t),t)}{\partial X(t)}\Big[f(X(t))X(t)\Delta t - qE(t)X(t)\Delta t + \sigma X(t)\Delta W(t)\Big] + \frac{1}{2}\frac{\partial^2 J^*(X(t),t)}{\partial X^2(t)}\Big[f(X(t))X(t)\Delta t - qE(t)X(t)\Delta t + \sigma X(t)\Delta W(t)\Big]^2 + o(\Delta t).$$
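The simplification that follows relies on the order of magnitude of the terms in the squared increment; spelling this out (a clarifying aside not present in the original derivation):

```latex
\big(\Delta X(t)\big)^2
  = \sigma^2 X^2(t)\big(\Delta W(t)\big)^2
  + 2\big(f(X(t)) - qE(t)\big)\sigma X^2(t)\,\Delta W(t)\,\Delta t
  + \big(f(X(t)) - qE(t)\big)^2 X^2(t)\,\Delta t^2 .
```

Since $(\Delta W(t))^2$ is of order $\Delta t$ in mean square and the last term is $o(\Delta t)$, only the first two terms are retained in (A3).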
After a few simplifications, we can write
$$J^*\big(X(t)+\Delta X(t),\, t+\Delta t\big) = J^*(X(t),t) + \frac{\partial J^*(X(t),t)}{\partial t}\Delta t + \frac{\partial J^*(X(t),t)}{\partial X(t)}\big(f(X(t))-qE(t)\big)X(t)\Delta t + \frac{\partial J^*(X(t),t)}{\partial X(t)}\sigma X(t)\Delta W(t) + \frac{1}{2}\frac{\partial^2 J^*(X(t),t)}{\partial X^2(t)}\sigma^2X^2(t)\big(\Delta W(t)\big)^2 + \frac{\partial^2 J^*(X(t),t)}{\partial X^2(t)}\big(f(X(t))-qE(t)\big)\sigma X^2(t)\Delta W(t)\Delta t + o(\Delta t).\qquad(A3)$$
Replacing $J^*\big(X(t)+\Delta X(t),\, t+\Delta t\big)$ from (A3) in (A1) provides
$$J^*(X(t),t) = \max_{E(t)}\mathbb{E}_{t,X(t)}\Bigg[\left[p_1 qX(t)-c_1-\left(p_2q^2X^2(t)+c_2\right)E(t)\right]E(t)\Delta t + (1-\delta\Delta t)\Big(J^*(X(t),t) + \frac{\partial J^*(X(t),t)}{\partial t}\Delta t + \frac{\partial J^*(X(t),t)}{\partial X(t)}\big(f(X(t))-qE(t)\big)X(t)\Delta t + \frac{\partial J^*(X(t),t)}{\partial X(t)}\sigma X(t)\Delta W(t) + \frac{1}{2}\frac{\partial^2 J^*(X(t),t)}{\partial X^2(t)}\sigma^2X^2(t)\big(\Delta W(t)\big)^2 + \frac{\partial^2 J^*(X(t),t)}{\partial X^2(t)}\big(f(X(t))-qE(t)\big)\sigma X^2(t)\Delta W(t)\Delta t + o(\Delta t)\Big)\Bigg].$$
We can deduce from the properties of the Wiener process that $\mathbb{E}_{t,X(t)}[\Delta W(t)]=0$ and $\mathbb{E}_{t,X(t)}[(\Delta W(t))^2]=\Delta t$. Thus, rearranging the latter equation provides
$$0 = \max_{E(t)}\Big\{\left[p_1 qX(t)-c_1-\left(p_2q^2X^2(t)+c_2\right)E(t)\right]E(t)\Delta t - \delta J^*(X(t),t)\Delta t + \frac{\partial J^*(X(t),t)}{\partial t}\Delta t + \frac{\partial J^*(X(t),t)}{\partial X(t)}\big(f(X(t))-qE(t)\big)X(t)\Delta t + \frac{1}{2}\frac{\partial^2 J^*(X(t),t)}{\partial X^2(t)}\sigma^2X^2(t)\Delta t + o(\Delta t)\Big\}.\qquad(A4)$$
Dividing (A4) by $\Delta t$ and letting $\Delta t\to 0$ results in
$$-\frac{\partial J^*(X(t),t)}{\partial t} = \max_{E(t)}\Big\{\left[p_1 qX(t)-c_1-\left(p_2q^2X^2(t)+c_2\right)E(t)\right]E(t) - \delta J^*(X(t),t) + \frac{\partial J^*(X(t),t)}{\partial X(t)}\big(f(X(t))-qE(t)\big)X(t) + \frac{1}{2}\frac{\partial^2 J^*(X(t),t)}{\partial X^2(t)}\sigma^2X^2(t)\Big\},\qquad(A5)$$
where $J^*(X(t),t)$ represents the maximized expected accumulated discounted profit earned from time $t$ to the end of the time horizon $T$.
Solving the above Hamilton–Jacobi–Bellman equation provides the solution to the stochastic control problem stated in Section 2.
The optimal variable effort is obtained from the HJB Equation (A5). Here, we denote by $D$ a function that represents the control-dependent terms in (A5), that is,
$$D(E) = \left[p_1 qX(t)-c_1-\left(p_2q^2X^2(t)+c_2\right)E(t)\right]E(t) - \frac{\partial J^*(X(t),t)}{\partial X(t)}\,qE(t)X(t),\qquad(A6)$$
and denote by $E_u^*(t)$ the unconstrained effort resulting from the maximization of Equation (A6). Thus, by solving the equation $dD(E)/dE = 0$ with respect to $E$, we obtain $E_u^*(t)$:
$$E_u^*(t) = \frac{\left(p_1 - \dfrac{\partial J^*(X(t),t)}{\partial X(t)}\right)qX(t) - c_1}{2\left(p_2 q^2 X^2(t) + c_2\right)}.$$
The maximized HJB equation is obtained by denoting the constrained optimal effort by $E^*(t)$ and replacing $E(t)$ with $E^*(t)$ in Equation (A5):
$$-\frac{\partial J^*(X(t),t)}{\partial t} = \left(p_1 qX(t)-c_1\right)E^*(t) - \left(p_2q^2X^2(t)+c_2\right)E^{*2}(t) - \delta J^*(X(t),t) + \frac{\partial J^*(X(t),t)}{\partial X(t)}\big(f(X(t))-qE^*(t)\big)X(t) + \frac{1}{2}\frac{\partial^2 J^*(X(t),t)}{\partial X^2(t)}\sigma^2X^2(t),$$
where the effort is provided by
$$E^*(t) = \begin{cases} E_{min}, & \text{if } E_u^*(t) < E_{min},\\ E_u^*(t), & \text{if } E_{min}\le E_u^*(t)\le E_{max},\\ E_{max}, & \text{if } E_u^*(t) > E_{max}, \end{cases}$$
and with
$$E_u^*(t) = \frac{\left(p_1 - \dfrac{\partial J^*(X(t),t)}{\partial X(t)}\right)qX(t) - c_1}{2\left(p_2 q^2 X^2(t) + c_2\right)}$$
being the unconstrained effort.

References

  1. Beddington, J.R.; May, R.M. Harvesting natural populations in a randomly fluctuating environment. Science 1977, 197, 463–465. [Google Scholar] [CrossRef] [PubMed]
  2. Clark, C.W. Mathematical Bioeconomics: The Optimal Management of Renewable Resources, 2nd ed.; Wiley: New York, NY, USA, 1990. [Google Scholar]
  3. Alvarez, L.H.R.; Shepp, L.A. Optimal harvesting of stochastically fluctuating populations. J. Math. Biol. 1998, 37, 155–177. [Google Scholar] [CrossRef]
  4. Alvarez, L.H.R. On the option interpretation of rational harvesting planning. J. Math. Biol. 2000, 40, 383–405. [Google Scholar] [CrossRef]
  5. Alvarez, L.H.R. Singular stochastic control in the presence of a state-dependent yield structure. Stoch. Process. Their Appl. 2000, 86, 323–343. [Google Scholar] [CrossRef]
  6. Brites, N.M.; Braumann, C.A. Harvesting in a Random Varying Environment: Optimal, Stepwise and Sustainable Policies for the Gompertz Model. Stat. Optim. Inf. Comput. 2019, 7, 533–544. [Google Scholar] [CrossRef]
  7. Brites, N.M.; Braumann, C.A. Harvesting optimization with stochastic differential equations models: Is the optimal enemy of the good? Stoch. Model. 2021, 0, 1–19. [Google Scholar] [CrossRef]
  8. Brites, N.M.; Braumann, C.A. Profit optimization of stochastically fluctuating populations: The effects of Allee effects. Optimization 2022, 1–12. [Google Scholar] [CrossRef]
  9. Brites, N.M.; Braumann, C.A. Fisheries management in random environments: Comparison of harvesting policies for the logistic model. Fish. Res. 2017, 195, 238–246. [Google Scholar] [CrossRef]
  10. Brites, N.M.; Braumann, C.A. Fisheries management in randomly varying environments: Comparison of constant, variable and penalized efforts policies for the Gompertz model. Fish. Res. 2019, 216, 196–203. [Google Scholar] [CrossRef]
  11. Brites, N.M.; Braumann, C.A. Stochastic differential equations harvesting policies: Allee effects, logistic-like growth and profit optimization. Appl. Stoch. Model. Bus. Ind. 2020, 36, 825–835. [Google Scholar] [CrossRef]
  12. Brites, N.M.; Braumann, C.A. Harvesting Policies with Stepwise Effort and Logistic Growth in a Random Environment. In Current Trends in Dynamical Systems in Biology and Natural Sciences; Aguiar, M., Braumann, C., Kooi, B.W., Pugliese, A., Stollenwerk, N., Venturino, E., Eds.; Springer: Cham, Switzerland, 2020; pp. 95–110. [Google Scholar] [CrossRef]
  13. Shah, M.A.; Sharma, U. Optimal harvesting policies for a generalized Gordon–Schaefer model in randomly varying environment. Appl. Stoch. Model. Bus. Ind. 2003, 19, 43–49. [Google Scholar] [CrossRef]
  14. Tsoularis, A.; Wallace, J. Analysis of logistic growth models. Math. Biosci. 2002, 179, 21–55. [Google Scholar] [CrossRef]
  15. Verhulst, P.F. Notice sur la loi que la population poursuit dans son accroissement. Corresp. Math. Phys. 1838, 10, 113–121. [Google Scholar]
  16. Braumann, C.A. Stochastic differential equation models of fisheries in an uncertain world: Extinction probabilities, optimal fishing effort, and parameter estimation. In Proceedings of the Mathematics in Biology and Medicine, Bari, Italy, 18–22 July 1985; Capasso, V., Grosso, E., Paveri-Fontana, S.L., Eds.; Springer: Berlin, Germany, 1985; pp. 201–206. [Google Scholar]
  17. Braumann, C.A. Introduction to Stochastic Differential Equations with Applications to Modelling in Biology and Finance; John Wiley & Sons, Inc.: New York, NY, USA, 2019. [Google Scholar]
  18. Kamien, M.I.; Schwartz, N.L. Dynamic Optimization; North-Holland: Amsterdam, The Netherlands, 1993. [Google Scholar]
  19. Fleming, W.H.; Soner, H.M. Controlled Markov Processes and Viscosity Solutions; Springer: New York, NY, USA, 2006. [Google Scholar]
  20. Touzi, N. Optimal Stochastic Control, Stochastic Target Problems, and Backward SDE; Springer: New York, NY, USA, 2013. [Google Scholar]
  21. Hanson, F.B.; Ryan, D. Optimal harvesting with both population and price dynamics. Math. Biosci. 1998, 148, 129–146. [Google Scholar] [CrossRef]
  22. LeVeque, R.J. Finite Difference Methods for Ordinary and Partial Differential Equations; Society for Industrial and Applied Mathematics: Philadelphia, PA, USA, 2007. [Google Scholar] [CrossRef]
Figure 1. Stock growth rate, dX(t)/dt, as a function of stock size, X(t), for several particular cases of the GL model (deterministic case without harvesting).
Figure 2. Stock, fishing mortality, yield, and profit sample paths (thinner lines) and means (thicker lines) obtained by applying models GL(1,1,1) (top panel) and GL(1,1,2) (bottom panel). Unit values are described in Table 1.
Figure 3. Stock, fishing mortality, yield, and profit sample paths (thinner lines) and means (thicker lines) obtained by applying models GL(1,2,1) (top panel) and GL(1,2,2) (bottom panel). Unit values are described in Table 1.
Figure 4. Stock, fishing mortality, yield, and profit sample paths (thinner lines) and means (thicker lines) obtained by applying model GL(1,2,3). Unit values are described in Table 1.
Table 1. Parameter values used to run the simulations. The definition of an SFU (Standardized Fishing Unit) is explained in [21].
| Parameter | Description | Value | Unit |
|---|---|---|---|
| r | Intrinsic growth rate | 0.71 | year⁻¹ |
| K | Carrying capacity | 80.5 × 10⁶ | kg |
| q | Catchability coefficient | 3.30 × 10⁻⁶ | SFU⁻¹ year⁻¹ |
| E_min | Minimum fishing effort | 0 | SFU |
| E_max | Maximum fishing effort | 0.9 r/q | SFU |
| σ | Strength of environmental fluctuations | 0.15 | year⁻¹/² |
| x | Initial population size | 0.25 K | kg |
| δ | Discount factor | 0.03 | year⁻¹ |
| p₁ | Linear price coefficient | 1.59 | $ kg⁻¹ |
| p₂ | Quadratic price coefficient | 0 | $ year kg⁻² |
| c₁ | Linear cost coefficient | 96 × 10⁻⁶ | $ SFU⁻¹ year⁻¹ |
| c₂ | Quadratic cost coefficient | 10⁻⁷ | $ SFU⁻² year⁻¹ |
| T | Time horizon | 25 | year |
| n | Number of time sub-intervals | 100 | — |
| x_max | Maximum stock size | 2 K | kg |
| m | Number of sub-intervals for the state space | 100 | — |
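As a rough illustration of how the Table 1 values enter a stock simulation, the sketch below integrates the generalized logistic SDE with a simple Euler–Maruyama scheme. The drift form r X^α [1 − (X/K)^β]^γ − qEX follows the Tsoularis–Wallace generalized logistic; the function name, the Euler–Maruyama discretization, and the non-negativity clamp are illustrative assumptions of this sketch (the optimal-policy computation in the paper uses a Crank–Nicolson scheme for the dynamic programming equation, not this path simulator).

```python
import numpy as np

def simulate_gl_path(alpha=1.0, beta=1.0, gamma=1.0, r=0.71, K=80.5e6,
                     q=3.30e-6, E=0.0, sigma=0.15, x0=0.25 * 80.5e6,
                     T=25.0, n=1000, seed=0):
    """Euler-Maruyama sample path of the harvested GL SDE
    dX = [ r X^alpha (1 - (X/K)^beta)^gamma - q E X ] dt + sigma X dW,
    with parameter defaults taken from Table 1 (E = 0 means no harvesting)."""
    rng = np.random.default_rng(seed)
    dt = T / n
    x = np.empty(n + 1)
    x[0] = x0
    for i in range(n):
        drift = r * x[i]**alpha * (1.0 - (x[i] / K)**beta)**gamma - q * E * x[i]
        diffusion = sigma * x[i]
        x[i + 1] = x[i] + drift * dt + diffusion * np.sqrt(dt) * rng.standard_normal()
        x[i + 1] = max(x[i + 1], 0.0)  # clamp: the stock cannot go negative
    return x

path = simulate_gl_path()  # GL(1,1,1), i.e., the classical logistic drift
```

With E = 0 the unharvested stock fluctuates around the carrying capacity K; averaging many such paths (different seeds) reproduces the kind of mean trajectories shown in Figures 2–4.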
Table 2. Optimal profit and standard deviation values for the particular cases of the GL growth model in Section 2.1. Values are in millions of US dollars. SD represents standard deviation.
| Model Representation | J* | SD |
|---|---|---|
| GL(1,1,1) | 374.148 | 32.351 |
| GL(1,1,2) | 214.864 | 32.205 |
| GL(1,2,1) | 565.456 | 38.365 |
| GL(1,2,2) | 432.174 | 30.185 |
| GL(1,2,3) | 366.377 | 25.182 |
Table 3. Alternative parameter values for model G L ( 1 , 2 , 3 ) . The parameters used to perform the sensitivity analysis were δ , c 1 , c 2 and σ . Profit and standard deviation values are in millions of US dollars. SD represents standard deviation.
| Parameter | Value | J* | SD |
|---|---|---|---|
| δ | 0.03 | 366.377 | 25.182 |
|   | 0.10 | 183.553 | 14.512 |
|   | 0.15 | 126.953 | 11.153 |
| c₁ | 96 × 10⁻⁶ | 366.377 | 25.182 |
|   | 96 × 10⁻⁴ | 366.355 | 25.181 |
|   | 96 × 10⁻² | 364.146 | 25.048 |
| c₂ | 10⁻⁷ | 366.377 | 25.182 |
|   | 10⁻⁵ | 362.629 | 25.107 |
|   | 10⁻³ | 187.007 | 13.932 |
| σ | 0.15 | 366.377 | 25.182 |
|   | 0.30 | 354.345 | 49.656 |
|   | 0.45 | 270.540 | 81.259 |