Article

Scaled Muth–ARMA Process Applied to Finance Market

by Abraão D. C. Nascimento 1, Maria C. S. Lima 1, Hassan Bakouch 2,3 and Najla Qarmalah 4,*
1 Statistics Department, Universidade Federal de Pernambuco, Recife 50670-901, Brazil
2 Department of Mathematics, College of Science, Qassim University, Buraydah 51452, Saudi Arabia
3 Department of Mathematics, Faculty of Science, Tanta University, Tanta 31111, Egypt
4 Department of Mathematical Sciences, College of Science, Princess Nourah bint Abdulrahman University, P. O. Box 84428, Riyadh 11671, Saudi Arabia
* Author to whom correspondence should be addressed.
Mathematics 2023, 11(8), 1908; https://doi.org/10.3390/math11081908
Submission received: 23 March 2023 / Revised: 15 April 2023 / Accepted: 16 April 2023 / Published: 18 April 2023

Abstract

The analysis of financial market time series is an important source for understanding the economic reality of a country. We introduce a new autoregressive moving average (ARMA) process, the sMuth–ARMA model, which has the sMuth law as its marginal distribution and whose additional parameter acts as a proportion that controls amodal and unimodal behavior. We propose a procedure for obtaining the maximum likelihood estimators of its parameters and evaluate its performance for various link functions through Monte Carlo simulations. This research also addresses fluctuations in cryptocurrencies, which have played an increasingly important role in the global economy. An application to the range-based volatility of Tether (USDT) stablecoin prices shows the usefulness of the proposed model compared with the Gaussian and other models reviewed.

1. Introduction

In practice, many applications admit a mathematical treatment by time series analysis (TSA), such as economic indices [1], financial market values [2], hydrological variables [3], and image processing [4], among others. The goal of TSA is twofold: to describe the dependence structure of real-world phenomena and to make predictions.
Autoregressive moving average (ARMA) models are often used to describe time series [5]. This is justified because these models are very promising due to their mathematical properties (related to the existence of a stochastic process and the relationship between the autocorrelation function and its spectral response) and statistical properties (estimators based on various criteria, associated asymptotic results, tests on parameters, model selection criteria, diagnostics, and so forth). ARMA models are often assumed to follow a Gaussian distribution [6], which makes their use impractical in many applications [7].
Several non-Gaussian time series models have been introduced in the TSA literature to fill this gap, and they can be categorized as follows:
  • Conditional, by the regression approach or data-based. First, Benjamin et al. [8] proposed the generalized ARMA (GARMA) process with a marginal distribution from the exponential family (from the discrete point of view: Poisson, binomial, …; from the continuous point of view: Gaussian, gamma, inverse Gaussian, …). Rocha and Cribari-Neto [9] introduced the beta ARMA (BARMA) model for the analysis of proportions. The Kumaraswamy ARMA (KARMA) model, introduced by Bayer et al. [10], has been used for hydrological data. Recently, Almeida-Junior and Nascimento [4] introduced the $\mathcal{G}_I^0$-ARMA process to describe intensities in synthetic aperture radar images.
  • Non-conditional or innovations-based:
    Discrete case: autoregressive (AR) processes with binomial marginals [11], ARMA processes with negative binomial and geometric marginals, and, in general, processes for integer values under underdispersion [12].
    Continuous case: Gamma processes [13], AR models with marginal generalized Laplace distribution [14] and AR processes with epsilon–skew–Gaussian innovations [15].
In this paper, we introduce a conditional model for time series having a marginal Muth distribution, pioneered by Jodrá et al. [16], with cumulative distribution function (cdf) and probability density function (pdf) given by
G(x; \alpha) = 1 - \exp\!\left[ \alpha x - \frac{1}{\alpha}\left( e^{\alpha x} - 1 \right) \right], \quad x > 0,
and
g(x; \alpha) = \left( e^{\alpha x} - \alpha \right) \exp\!\left[ \alpha x - \frac{1}{\alpha}\left( e^{\alpha x} - 1 \right) \right],
respectively. The k-th moment of a random variable $Y \sim \mathrm{Muth}(\alpha)$ can be expressed as
E(Y^k) = \frac{e^{1/\alpha}\, \Gamma(k+1)}{\alpha^{k}}\, E_0^{k-1}(1/\alpha), \quad k = 1, 2, \ldots,
where $E_s^m(z) := \Gamma^{-1}(m+1) \int_1^{\infty} (\log u)^m\, u^{-s} e^{-z u}\, du$ is the generalized exponential integral function. This distribution was designed to describe the frequency of death due solely to aging and is also known as the Teissier distribution, introduced by Teissier [17]. The law under analysis is a scaled version of Muth's one-parameter distribution, introduced by Muth [18] in connection with reliability. Some mathematical properties of the Muth distribution were derived by Jodrá et al. [16]. We show the potential of this distribution in the financial field of cryptocurrencies by using it in data-based TSA.
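To make these expressions concrete, the following R sketch evaluates the Muth density and checks the k-th moment formula against direct numerical integration; the function names are illustrative, and the density is computed on the log scale purely for numerical stability.

```r
# Sketch only: Muth(alpha) density and a numerical check of the k-th moment
# formula through the generalized exponential integral E_0^{k-1}(1/alpha).
dmuth <- function(x, alpha) {
  z <- alpha * x
  # (e^z - alpha) * exp(z - (e^z - 1)/alpha), evaluated on the log scale
  exp(2 * z + log1p(-alpha * exp(-z)) - expm1(z) / alpha)
}

muth_moment_formula <- function(k, alpha) {
  # E_0^{k-1}(1/alpha) = (1/Gamma(k)) * integral_1^Inf (log u)^(k-1) e^(-u/alpha) du
  E0 <- integrate(function(u) log(u)^(k - 1) * exp(-u / alpha), 1, Inf)$value / gamma(k)
  exp(1 / alpha) * gamma(k + 1) / alpha^k * E0
}

muth_moment_numeric <- function(k, alpha)
  integrate(function(x) x^k * dmuth(x, alpha), 0, Inf)$value

# Both routes agree, e.g. for k = 2 and alpha = 0.5:
c(formula = muth_moment_formula(2, 0.5), numeric = muth_moment_numeric(2, 0.5))
```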
To show the physical interpretation of the approach used, we describe an example. Suppose a relationship between two (or more) variables is of interest, e.g., the yield of a chemical reaction and the temperature at which the reaction is performed. One of these variables, say X, is known, while the other, say Y, is to be predicted. We cannot predict Y exactly, since it is random; therefore, we estimate its mean, which is the conditional expectation $E(Y \mid X)$. This expectation, $E(Y \mid X)$, can take several forms, including linear, quadratic, and exponential ones. Here, we are interested in the linear ARMA time series form. In this paper, we present a novel ARMA process that has the sMuth law as its marginal distribution. Several of its mathematical properties are derived: the score vector and the Fisher information matrix (FIM). We provide a procedure to obtain conditional maximum likelihood estimators of the sMuth–ARMA parameters. Finally, we apply our model to describe cryptocurrency range-based volatility. Cryptocurrencies are financial instruments that are more volatile than fiat currencies, have different time scale impacts, and contribute to extreme events [19]. A plausible way to conceptualize extreme events is to characterize them as statistically improbable events. Even with a high probability of not occurring, such events are observed in financial, social, and natural systems and have complex consequences [20]. The results argue in favor of our proposal for describing the range-based volatility of the Tether asset in comparison with the autoregressive integrated moving average (ARIMA) model and a class of state-space models.
This paper is divided into sections as follows. In Section 2, we define the sMuth–ARMA model and provide some of its mathematical properties; the estimation procedure for the parameters uses conditional maximum likelihood. Monte Carlo simulations in Section 3 examine the precision of the estimates of the sMuth–ARMA parameters. Further, we show that the sMuth–ARMA regression offers better performance than the Gaussian ARMA and other models fitted to the financial data. Some conclusions are offered in Section 4.

2. The sMuth–ARMA Process: Estimation and Forecasting

This section addresses the sMuth–ARMA model, defined conditionally (i.e., by regression), assuming that the underlying marginal model is the sMuth distribution.
Let $\mu \in \mathbb{R}^+$ and $Y \sim \mathrm{Muth}(\alpha)$; then $X := \mu \times Y$ is such that its cdf and pdf are given, respectively, by
F_X(x; \mu, \alpha) = 1 - \exp\!\left[ \frac{\alpha}{\mu} x - \frac{1}{\alpha}\left( e^{\alpha x/\mu} - 1 \right) \right]
and
f_X(x; \mu, \alpha) = \frac{1}{\mu} \left( e^{\alpha x/\mu} - \alpha \right) \exp\!\left[ \frac{\alpha}{\mu} x - \frac{1}{\alpha}\left( e^{\alpha x/\mu} - 1 \right) \right],
where $\alpha \in (0, 1]$ and $x, \mu > 0$. This distribution was coined the scaled Muth law by Biçer et al. [21] and is denoted as $X \sim \mathrm{sMuth}(\alpha, \mu)$. Note that
E(X) = \mu \quad \text{and} \quad \mathrm{Var}(X) = \mu^2 \left[ \frac{2 e^{1/\alpha}}{\alpha}\, \Gamma(0, 1/\alpha) - 1 \right],
where $\Gamma(a, z) = \int_z^{\infty} t^{a-1} e^{-t}\, dt$ is the upper incomplete gamma function.
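As a quick sanity check of the mean and variance expressions above, the following R sketch compares them with moments obtained by numerical integration of the sMuth density for one illustrative pair (μ, α); Γ(0, z) is also computed by integration, and all names are illustrative.

```r
# Sketch only: numerical check of E(X) = mu and the variance expression above.
dsmuth <- function(x, mu, alpha) {
  z <- alpha * x / mu
  exp(2 * z + log1p(-alpha * exp(-z)) - expm1(z) / alpha) / mu   # sMuth pdf, log-scale trick
}
inc_gamma0 <- function(z) integrate(function(t) exp(-t) / t, z, Inf)$value  # Gamma(0, z)

mu <- 2; alpha <- 0.4                                            # illustrative values
m1 <- integrate(function(x) x   * dsmuth(x, mu, alpha), 0, Inf)$value
m2 <- integrate(function(x) x^2 * dsmuth(x, mu, alpha), 0, Inf)$value
c(mean_numeric = m1, mean_stated = mu,
  var_numeric  = m2 - m1^2,
  var_stated   = mu^2 * (2 * exp(1 / alpha) * inc_gamma0(1 / alpha) / alpha - 1))
```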
Figure 1 displays some sMuth density curves. It is noticeable that increasing $\alpha$ converts amodal curves into unimodal ones, while $\mu$ controls the sMuth mean, as expected.
The proposed regression structure is similar to those used in constructing the GARMA [8], BARMA [9], KARMA [10], and $\mathcal{G}_I^0$-ARMA [4] models. Thus, we need to define two components: a systematic one and a random one.
First, for the systematic component, we adopt the following specification, based on Almeida-Junior and Nascimento [4]. Let $\{x_t;\, t = 1, \ldots, n\}$ be an observed series of $\{X_t;\, t = 1, \ldots, n\}$, let $\mu_t = E(X_t \mid \mathcal{F}_{t-1})$ denote the mean of the series having the sMuth law at instant t, and let $\mathcal{F}_t = \sigma(X_t, X_{t-1}, \ldots)$ represent the $\sigma$-algebra generated by the values observed up to instant t. Then the strictly monotonic and twice-differentiable link function, say $g(\cdot)$, which relates $\mu_t$ to the linear predictor $\eta_t$, satisfies
g(\mu_t) = \eta_t = \delta + z_t^\top \beta + \sum_{i=1}^{p} \phi_i \left[ g(x_{t-i}) - z_{t-i}^\top \beta \right] + \sum_{k=1}^{q} \theta_k r_{t-k},
where
  • $r_t = g(x_t) - g(\mu_t)$ is the error term, which has a recursive nature, although it could be defined with a martingale difference nature, such as in the approach adopted by Zheng et al. [22] associated with Equation (6). As discussed by Scher et al. [23] (p. 3), we adopt the error term $r_t$ in Equation (6) in a residual fashion; i.e., it does not represent a stochastic process, such as the innovations in standard ARMA models. The time series outcomes generated from our model come from a conditional distribution, and the respective moving average error, $r_t$, represents the difference between an observed quantity, $g(x_t)$, and the corresponding model-based quantity, $g(\mu_t)$.
  • δ  denotes the intercept.
  • $z_t \in \mathbb{R}^k$ ($t = 1, \ldots, n$) is a set of k other (covariate) time series observed together with the series under study, $x_t$. Although we do not focus on exogenous variables in this paper, we refer to them explicitly in Equation (6) for generality.
  • $\beta = (\beta_1, \ldots, \beta_k)^\top$ is a vector of unknown parameters associated with the covariates $z_t = (z_{t1}, \ldots, z_{tk})^\top$; and
  • $\phi = (\phi_1, \ldots, \phi_p)^\top$ and $\theta = (\theta_1, \ldots, \theta_q)^\top$ are the parameter vectors of the AR and MA structures, respectively.
The stationarity condition in this discussion follows that proposed by Woodard et al. [24]. We chose three link functions for the mapping $g(\cdot): \mathbb{R}^+ \to \mathbb{R}$, such that $\mu_t = g^{-1}(\eta_t)$ if and only if $\eta_t = g(\mu_t)$: the logarithm, the square root, and the logarithm of the Lambert W function, $\log W(\cdot)$. Almeida-Junior and Nascimento [4] give details on using $\log W(\cdot)$ for variables with positive support.
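A minimal R sketch of these three links and their inverses is given below. The Lambert W evaluation uses a simple Newton iteration, which is an assumption of this sketch (any W routine could be substituted); note that the inverse of log W(·) needs no W evaluation, since W(x) = e^η implies x = e^η exp(e^η).

```r
# Sketch only: the three link functions g and their inverses; lambert_w0 is an
# illustrative Newton iteration for the principal branch of the Lambert W function.
lambert_w0 <- function(x, iters = 50) {
  w <- ifelse(x < 1, x, log(x))           # rough starting point, valid for x > 0
  for (i in seq_len(iters)) w <- w - (w * exp(w) - x) / (exp(w) * (1 + w))
  w
}

links <- list(
  log  = list(g = log,  ginv = exp),
  sqrt = list(g = sqrt, ginv = function(eta) eta^2),
  logW = list(g    = function(x) log(lambert_w0(x)),
              ginv = function(eta) exp(eta + exp(eta)))  # x = W^{-1}(e^eta) = e^eta * exp(e^eta)
)

links$logW$ginv(links$logW$g(2.5))        # recovers 2.5
```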
The random component is defined as follows.
Definition 1. 
Let $\{X_t;\, t = 1, \ldots, n\}$ be a sequence of random variables and $\mathcal{F}_t = \sigma(X_t, X_{t-1}, \ldots)$. The sMuth–ARMA model is such that $X_t \mid \mathcal{F}_{t-1}$ follows the sMuth distribution, $X_t \mid \mathcal{F}_{t-1} \sim \mathrm{sMuth}(\alpha, \mu_t)$, with density
f_X(x_t \mid \mathcal{F}_{t-1}) = \frac{1}{\mu_t} \left( e^{\alpha x_t/\mu_t} - \alpha \right) \exp\!\left[ \frac{\alpha x_t}{\mu_t} - \frac{1}{\alpha}\left( e^{\alpha x_t/\mu_t} - 1 \right) \right],
where $\mu_t = E(X_t \mid \mathcal{F}_{t-1}) = g^{-1}(\eta_t)$ is given in Equation (6); equivalently, adding $r_t = g(x_t) - g(\mu_t)$ to both sides of Equation (6) yields
g(x_t) = \delta + z_t^\top \beta + \sum_{i=1}^{p} \phi_i \left[ g(x_{t-i}) - z_{t-i}^\top \beta \right] + \sum_{k=1}^{q} \theta_k r_{t-k} + r_t,
which corresponds to the traditional definition of an ARMA process with exogenous terms.
The model of Definition 1 is denoted as $X_t \sim$ sMuth–ARMA$(p, q)$.

2.1. Generator of the sMuth–ARMA Model

Here, we illustrate some sub-models of the sMuth–ARMA$(p, q)$ model. Note that the sMuth–ARMA model has the property that the time series, transformed by the link function, has an ARMA structure:
  • sM-AR(1): $g(\mu_t) = \delta + \phi\, g(x_{t-1})$ or, equivalently, $g(x_t) = \delta + \phi\, g(x_{t-1}) + r_t$, where $r_t = g(x_t) - g(\mu_t)$;
  • sM-MA(1): $g(\mu_t) = \delta + \theta \left[ g(x_{t-1}) - g(\mu_{t-1}) \right]$ or, equivalently, $g(x_t) = \delta + \theta\, r_{t-1} + r_t$;
  • sM-ARMA(1,1): $g(\mu_t) = \delta + \phi\, g(x_{t-1}) + \theta \left[ g(x_{t-1}) - g(\mu_{t-1}) \right]$ or, equivalently, $g(x_t) = \delta + \phi\, g(x_{t-1}) + \theta\, r_{t-1} + r_t$.
These sub-models are generated following Algorithm 1. For simplicity, since we describe only one time series, we omit the influence of the exogenous variables $z_t$ and of the parameter vector $\beta$.
Algorithm 1. 
Random generator for an observed series from the sMuth–ARMA$(p, q)$ model.
1: Define values for the parameters $\alpha$, $\delta$, $\phi = (\phi_1, \ldots, \phi_p)^\top$, $\theta = (\theta_1, \ldots, \theta_q)^\top$ and for the pairs $(x_i, \mu_i)$, $i = 1, \ldots, m$, where $m = \max\{p, q\}$;
2: Calculate $\mu_t$ (for $t \geq m + 1$) using
\mu_t = g^{-1}\!\left( \delta + \sum_{i=1}^{p} \phi_i\, g(x_{t-i}) + \sum_{i=1}^{q} \theta_i \left[ g(x_{t-i}) - g(\mu_{t-i}) \right] \right);
3: Generate $X_t \sim \mathrm{sMuth}(\alpha, \mu_t)$ for $t = m+1, \ldots, n$;
4: Repeat steps 2 and 3 until n values of the series are generated.
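A possible R implementation of Algorithm 1 with the log link is sketched below; the starting pairs are set to x_i = μ_i = g^{-1}(δ) for simplicity, and the sMuth sampler inverts the cdf numerically. All names and defaults are illustrative rather than the authors' code.

```r
# Sketch only: random generation from the sMuth-ARMA(p, q) model (Algorithm 1).
psmuth <- function(x, mu, alpha) {                 # sMuth cdf of Section 2
  z <- alpha * x / mu
  1 - exp(z - expm1(z) / alpha)
}

rsmuth <- function(n, mu, alpha) {                 # inversion sampling via uniroot
  mu <- rep_len(mu, n); u <- runif(n)
  vapply(seq_len(n), function(i) {
    up <- mu[i]
    while (psmuth(up, mu[i], alpha) < u[i]) up <- 2 * up
    uniroot(function(x) psmuth(x, mu[i], alpha) - u[i], lower = 0, upper = up)$root
  }, numeric(1))
}

rsmuth_arma <- function(n, alpha, delta, phi = numeric(0), theta = numeric(0),
                        g = log, ginv = exp) {
  p <- length(phi); q <- length(theta); m <- max(p, q, 1)
  x <- mu <- numeric(n)
  x[1:m] <- mu[1:m] <- ginv(delta)                 # step 1: starting pairs (x_i, mu_i)
  for (t in (m + 1):n) {                           # steps 2-4
    ar <- if (p > 0) sum(phi * g(x[t - (1:p)])) else 0
    ma <- if (q > 0) sum(theta * (g(x[t - (1:q)]) - g(mu[t - (1:q)]))) else 0
    mu[t] <- ginv(delta + ar + ma)
    x[t]  <- rsmuth(1, mu[t], alpha)
  }
  x
}

y <- rsmuth_arma(400, alpha = 0.5, delta = 1, phi = 0.8)   # an sM-AR(1) series
```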
The following behavior is expected from the classic TSA literature [5]:
  • the AR(1) process has a cut-off at the first lag of the partial autocorrelation function (PACF), while its autocorrelation function (ACF) decays exponentially or sinusoidally;
  • the MA(1) process has a cut-off at the first lag of the ACF, while its PACF decays exponentially or sinusoidally;
  • the ARMA(1,1) process does not have clearly defined cut-offs.
Figure 2 displays the simulation scenarios for the sMuth–AR(1), sMuth–MA(1), and sMuth–ARMA(1,1) models. The expected behavior is achieved, and the cut-offs in the sample (ACF, PACF) pairs can also be used as a selection mechanism for sMuth–ARMA$(p, q)$ models.

2.2. Estimation

We estimate the sMuth–ARMA parameters by conditional maximum likelihood (CML). Let $\{x_1, \ldots, x_n\}$ be an observed series obtained from the random series $\{X_1, \ldots, X_n\}$ such that $X_t \mid \mathcal{F}_{t-1} \sim$ sMuth–ARMA$(p, q)$ with parameters $\{\alpha, \mu_t\}$, where $\mu_t$ is given in Equation (6), and the parameter vector is $\pi = (\delta, \beta^\top, \phi^\top, \theta^\top, \alpha)^\top$.
The CML estimation maximizes the conditional log-likelihood function at $\pi$, namely
\ell(\pi) = \ell(\pi; x_{m+1}, \ldots, x_n) = \sum_{t=m+1}^{n} \log f(x_t; \pi \mid \mathcal{F}_{t-1}) = \sum_{t=m+1}^{n} \ell_t(\pi; x_t),
where the summation starting at $m+1$ reflects the necessary conditioning on the first m elements of $\mathcal{F}_t$ and
\ell_t(\pi; x_t) = -\log \mu_t + \log\!\left( e^{\alpha x_t/\mu_t} - \alpha \right) + \frac{\alpha x_t}{\mu_t} + \frac{1}{\alpha} - \frac{1}{\alpha}\, e^{\alpha x_t/\mu_t}.
An important quantity in this estimation method is the score function,
U(\pi) = \left[ U_\delta, U_\beta^\top, U_\phi^\top, U_\theta^\top, U_\alpha \right]^\top := \left( \frac{\partial \ell(\pi)}{\partial \delta}, \frac{\partial \ell(\pi)}{\partial \beta^\top}, \frac{\partial \ell(\pi)}{\partial \phi^\top}, \frac{\partial \ell(\pi)}{\partial \theta^\top}, \frac{\partial \ell(\pi)}{\partial \alpha} \right)^{\!\top},
the components of which are determined in the next proposition. It is worth noting that, while our analytical development covers all parameters, allowing potential users to apply the sMuth–ARMA model in different contexts, we did not focus on exogenous variables (i.e., on $\beta$) in this paper.
Proposition 1. 
Let $U(\pi)$ be the score vector associated with the observed series $\{x_t;\, t = 1, \ldots, n\}$, an outcome of $\{X_t;\, t = 1, \ldots, n\}$ such that $[X_t \mid \mathcal{F}_{t-1}] \sim \mathrm{sMuth}(\alpha, \mu_t)$. Then, $U(\pi)$ is determined by
U_\delta = s^\top T a, \quad U_\beta = M^\top T a, \quad U_\phi = P^\top T a, \quad U_\theta = R^\top T a,
and
U_\alpha = \sum_{t=m+1}^{n} \left[ \frac{(x_t/\mu_t)\, e^{\alpha x_t/\mu_t} - 1}{e^{\alpha x_t/\mu_t} - \alpha} + \frac{x_t}{\mu_t} - \frac{1}{\alpha^2} + \frac{1}{\alpha^2}\, e^{\alpha x_t/\mu_t} - \frac{x_t}{\alpha \mu_t}\, e^{\alpha x_t/\mu_t} \right],
where $T := \operatorname{diag}\{1/g'(\mu_{m+1}), \ldots, 1/g'(\mu_n)\}$, $a = [A_{m+1}, \ldots, A_n]^\top$ with
A_t := -\frac{1}{\mu_t} - \frac{\alpha x_t}{\mu_t^2}\, \frac{e^{\alpha x_t/\mu_t}}{e^{\alpha x_t/\mu_t} - \alpha} + \frac{x_t}{\mu_t^2} \left( e^{\alpha x_t/\mu_t} - \alpha \right),
$s = [s_{m+1}, \ldots, s_n]^\top$ with
s_l := 1 - \sum_{k=1}^{q} \theta_k\, \frac{g'(\mu_{l+m})}{g'(\eta_{l+m-k})}\, \frac{\partial \eta_{l+m-k}}{\partial \delta},
$M = [m_{m+1}, \ldots, m_n]^\top$ is an $(n-m) \times k$ matrix whose t-th row is given by
m_t := z_{t+m} - \sum_{i=1}^{p} \phi_i z_{t+m-i} - \sum_{k=1}^{q} \theta_k\, \frac{g'(\mu_{t+m})}{g'(\eta_{t+m-k})}\, \frac{\partial \eta_{t+m-k}}{\partial \beta},
$P$ is an $(n-m) \times p$ matrix whose $(t, i)$-th entry, for $i = 1, \ldots, p$, is
p_{t,i} := g(x_{t+m-i}) - z_{t+m-i}^\top \beta - \sum_{k=1}^{q} \theta_k\, \frac{g'(\mu_{t+m})}{g'(\eta_{t+m-k})}\, \frac{\partial \eta_{t+m-k}}{\partial \phi_i},
and $R$ is an $(n-m) \times q$ matrix whose $(t, i)$-th entry, for $i = 1, \ldots, q$, is
r_{t,i} := r_{t+m-i} - \sum_{k=1}^{q} \theta_k\, \frac{g'(\mu_{t+m})}{g'(\eta_{t+m-k})}\, \frac{\partial \eta_{t+m-k}}{\partial \theta_i}.
Some algebraic details on the derivation of these terms can be found in Appendix A.
Thus, the CML estimators (CMLEs) are defined as
\hat{\pi} = \underset{\pi \in \mathbb{R} \times \mathbb{R}^k \times \mathbb{R}^p \times \mathbb{R}^q \times (0, 1]}{\arg\max}\; \ell(\pi)
or, equivalently, they can be obtained as a solution of the non-linear equations $U(\pi)\big|_{\pi = \hat{\pi}} = \mathbf{0}$.
Since these equations do not have closed-form solutions, we adopt iterative methods to obtain the CMLEs. We used the Nelder–Mead method available in the R programming language and, as the initial point of the iterative process, we considered the ordinary least squares estimates of $(\delta, \beta, \phi, \theta)$, namely
\big( \tilde{\delta}, \tilde{\beta}^\top, \tilde{\phi}^\top, \tilde{\theta}^\top \big)^{\!\top} = (Z^\top Z)^{-1} Z^\top g(x),
where
Z = \big[\, \mathbf{1}_{n-m},\; z_{t,1}, \ldots, z_{t,k},\; g(x_{t-1}), \ldots, g(x_{t-p}),\; r_{t-1}, \ldots, r_{t-q} \,\big], \quad t = m+1, \ldots, n,
$\mathbf{1}_{n-m}$ is an $(n-m)$-dimensional vector of ones, and $z_{t,j}$ is the j-th column of $z_t$.
An initial estimate for $\alpha$, say $\tilde{\alpha}$, given $\mu_t$ evaluated at $(\tilde{\delta}, \tilde{\beta}, \tilde{\phi}, \tilde{\theta})$, say $\tilde{\mu}_t$, is given by the solution of the non-linear equation
\sum_{t=m+1}^{n} \left[ \frac{(x_t/\tilde{\mu}_t)\, e^{\tilde{\alpha} x_t/\tilde{\mu}_t} - 1}{e^{\tilde{\alpha} x_t/\tilde{\mu}_t} - \tilde{\alpha}} + \frac{1}{\tilde{\alpha}^2}\left( e^{\tilde{\alpha} x_t/\tilde{\mu}_t} - 1 \right) + \frac{x_t}{\tilde{\mu}_t}\left( 1 - \frac{1}{\tilde{\alpha}}\, e^{\tilde{\alpha} x_t/\tilde{\mu}_t} \right) \right] = 0.
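The following R sketch illustrates this estimation scheme for a sMuth–AR(1) with log link on a strictly positive series: least squares on g(x_t) provides the starting values and optim's Nelder–Mead maximizes the conditional log-likelihood. Mapping α through a logistic transform so that it stays in (0, 1) is a convenience assumed by this sketch, not part of the procedure above; all names are illustrative.

```r
# Sketch only: conditional ML for a sMuth-AR(1) with log link.
smuth_ar1_negloglik <- function(par, x, g = log, ginv = exp) {
  delta <- par[1]; phi <- par[2]
  alpha <- plogis(par[3])                         # keep alpha in (0, 1)
  n  <- length(x)
  mu <- ginv(delta + phi * g(x[1:(n - 1)]))       # mu_t for t = 2, ..., n
  z  <- alpha * x[2:n] / mu
  -sum(-log(mu) + log(exp(z) - alpha) + z + 1 / alpha - exp(z) / alpha)
}

fit_smuth_ar1 <- function(x) {
  ols   <- coef(lm(log(x[-1]) ~ log(x[-length(x)])))       # OLS starting values
  start <- c(unname(ols), qlogis(0.5))
  opt   <- optim(start, smuth_ar1_negloglik, x = x, method = "Nelder-Mead")
  c(delta = opt$par[1], phi = opt$par[2], alpha = plogis(opt$par[3]))
}

# e.g. fit_smuth_ar1(y) for the series generated in the Section 2.1 sketch
```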
Another important statistical quantity to be derived is the FIM for the sMuth–ARMA process. After some algebraic manipulations, the FIM is given by
K(\pi) = E\!\left[ \frac{\partial \ell(\pi)}{\partial \pi}\, \frac{\partial \ell(\pi)}{\partial \pi^\top} \,\middle|\, \mathcal{F}_{t-1} \right] = E\!\left[ -\frac{\partial^2 \ell(\pi)}{\partial \pi\, \partial \pi^\top} \,\middle|\, \mathcal{F}_{t-1} \right] = \begin{pmatrix} K_{\alpha\alpha} & K_{\alpha\delta} & K_{\alpha\beta} & K_{\alpha\phi} & K_{\alpha\theta} \\ K_{\delta\alpha} & K_{\delta\delta} & K_{\delta\beta} & K_{\delta\phi} & K_{\delta\theta} \\ K_{\beta\alpha} & K_{\beta\delta} & K_{\beta\beta} & K_{\beta\phi} & K_{\beta\theta} \\ K_{\phi\alpha} & K_{\phi\delta} & K_{\phi\beta} & K_{\phi\phi} & K_{\phi\theta} \\ K_{\theta\alpha} & K_{\theta\delta} & K_{\theta\beta} & K_{\theta\phi} & K_{\theta\theta} \end{pmatrix},
where the components are given in Appendix B.

2.3. Forecasting

Defining the prediction equation of a time series model plays a central role in its use. It is often understood as the projection of $X_t$ onto a closed subspace of a suitable Hilbert space [25] (Chapter 2), represented by a conditional expected value. From the forecasting equation, one can represent an observed time series and forecast its future observations.
The forecasting equation for the sMuth–ARMA$(p, q)$ model is based on $\mu_t$, as follows:
\mu_t = g^{-1}\!\left( \delta + z_t^\top \beta + \sum_{i=1}^{p} \phi_i \left[ g(x_{t-i}) - z_{t-i}^\top \beta \right] + \sum_{k=1}^{q} \theta_k r_{t-k} \right),
where
r_t = \begin{cases} g(x_t) - g(\hat{\mu}_t), & m < t \leq n, \\ 0, & t \leq m. \end{cases}
We used the following modification of Equation (10) to predict future values:
\mu_{n+h} = g^{-1}\!\left( \delta + z_{n+h}^\top \beta + \sum_{i=1}^{p} \phi_i \left[ g(x_{n+h-i}) - z_{n+h-i}^\top \beta \right] + \sum_{k=1}^{q} \theta_k r_{n+h-k} \right).
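A minimal R sketch of this recursion for a fitted sMuth–AR(1) without covariates follows; replacing unknown future values by their own forecasts is the convention assumed here, and δ and φ are taken from a previous fit.

```r
# Sketch only: h-step-ahead forecasts for a fitted sMuth-AR(1) (no covariates).
forecast_smuth_ar1 <- function(x, delta, phi, h = 5, g = log, ginv = exp) {
  path <- x
  out  <- numeric(h)
  for (j in seq_len(h)) {
    out[j] <- ginv(delta + phi * g(path[length(path)]))   # mu_{n+j}
    path   <- c(path, out[j])                             # feed the forecast back in
  }
  out
}
```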

3. Numerical Results and Applications

3.1. Simulation Study

We conducted Monte Carlo simulations to evaluate the accuracy of the CMLEs of the sMuth–ARMA parameters. As assessment criteria, we used the mean of the estimated values and the associated mean square error (MSE). Further, we analyzed the influence of three link functions: the logarithm, the square root, and the logarithm of the Lambert W function, $\log W(\cdot)$. Almeida-Junior and Nascimento [4] provide details on using $\log W(\cdot)$ for variables with positive support.
We considered 10,000 Monte Carlo replicas, taking into account the sample sizes $n \in \{49, 121, 400\}$ and the parameters $\delta = 1$, $\phi_i = 0.5$, and $\alpha \in \{0.1, 0.5\}$. Our objectives were twofold: (i) to assess whether the CMLEs behaved asymptotically as expected, and (ii) to test the impact of the additional parameter $\alpha$, whose small values were more critical.
Table 1 and Table 2 report the values of these assessment measures for the sMuth–ARMA(1,1) and sMuth–AR(1) models, respectively. For simplicity, we omitted the sMuth–MA(1) case, since the associated results were close to those of the sMuth–AR(1). In general, the best values were obtained for $g(x) = \log W(x)$.
The mean estimates converge to the specified values, and the MSE values decrease as the sample size increases. In terms of the effect of $\alpha$ on the estimates, the MSEs became higher for small values of $\alpha$.

3.2. Application to Real-Life Data

Tether (USDT) is a stablecoin with a price tied to USD 1.00. It is a blockchain-based cryptocurrency whose tokens in circulation are backed by an equivalent quantity of US dollars. Stablecoins track traditional fiat currencies, such as the dollar, euro, or Japanese yen, which are maintained in a specified bank account. We found that the time series of Tether (USDT) stablecoin prices is nonstationary. As an alternative approach to satisfy stationarity, we can use the volatilities of the prices. Modeling volatility has some advantages. After a financial crisis, or a new government regulation, there may be significant changes in the markets that are not necessarily captured by returns. In these cases, changes in variance indicate the arrival of information, and, thus, any dispersion measure can reflect the relationship between information flow and volatility in financial time series. Moreover, time series or regression modeling of the data usually assumes that the variance is constant or that the process is stationary. When this assumption is not met, using volatilities instead of asset returns can lead to a more predictable quantity that can be modeled quite well, as seen in Alizadeh et al. [26]. Chiang and Wang [27] also used stock market volatilities and fitted a conditional autoregressive range (CARR) model to them. Range-based volatility over a fixed interval, for example, one day or one month, is defined as
X_t = 100\, [\log(H_t) - \log(L_t)],
where $H_t$ and $L_t$ are the highest and lowest prices of the asset in the interval $[t-1, t]$, which is a day or a month.
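A minimal R sketch of this volatility measure, together with the Box–Cox link used in the fits below, is as follows; `high` and `low` are assumed to be vectors of the highest and lowest prices per interval, and the names are illustrative.

```r
# Sketch only: range-based volatility and the Box-Cox link g(x) = (x^lambda - 1)/lambda.
range_volatility <- function(high, low) 100 * (log(high) - log(low))

g_boxcox    <- function(x,   lambda = 0.05) (x^lambda - 1) / lambda
ginv_boxcox <- function(eta, lambda = 0.05) (lambda * eta + 1)^(1 / lambda)
```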
Now we are in a position to apply the sMuth–ARMA model. After an initial pilot study, we chose the Box–Cox link function $g(x) = (x^{\lambda} - 1)/\lambda$ with $\lambda = 0.05$ for the real-life data. We analyzed USDT, which keeps cryptocurrency valuations stable, unlike the wild price swings of other popular cryptocurrencies, such as Bitcoin and Ethereum. For this purpose, we used the mean absolute error (MAE), the mean absolute percentage error (MAPE), and the mean absolute scaled error (MASE) to evaluate the performance of the fits. These measures are commonly used in the literature to evaluate the accuracy of time series model predictions (Chen et al. [28]). It is worth noting that the choice of $g(x)$ can also be guided by the MAE, MAPE, and MASE. In the following, we define these measures:
\mathrm{MAE} = \frac{1}{n} \sum_{t=1}^{n} |e_t|, \qquad \mathrm{MAPE} = \frac{1}{n} \sum_{t=1}^{n} \frac{|e_t|}{|x_t|}, \qquad \mathrm{MASE} = \frac{\frac{1}{n} \sum_{t=1}^{n} |e_t|}{\frac{1}{n-1} \sum_{t=2}^{n} |e_t^*|},
where $e_t = x_t - \hat{x}_t$ is the one-step-ahead prediction error and $e_t^* = x_t - x_{t-1}$ represents the first difference of $x_t$.
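These measures can be computed directly, as in the short R sketch below, where `e` holds the one-step prediction errors and `x` the observed series (illustrative names).

```r
# Sketch only: accuracy measures defined above.
mae  <- function(e)    mean(abs(e))
mape <- function(e, x) mean(abs(e) / abs(x))
mase <- function(e, x) mean(abs(e)) / mean(abs(diff(x)))   # denominator uses e*_t = x_t - x_{t-1}
```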
Figure 3 shows the observed USDT time series and its sample autocorrelation (ACF) and partial autocorrelation (PACF) functions. The (ACF, PACF) pair indicates stationarity and short memory, while the observed series shows an exponential increase in its middle part. The Phillips–Perron and augmented Dickey–Fuller tests, with p-values around 0.01, confirmed the previous conclusion and suggested the use of ARMA-based processes or of smoothing methods through regression or state-space models. In what follows, we compare our observation-driven time series modeling proposal with the Gaussian ARMA process and with the very useful family of state-space models (SSMs) proposed in [29], which includes several well-known smoothing methods, such as the moving average, exponential smoothing, and the additive and multiplicative Holt–Winters methods. Hyndman et al. [29] coined the notation ETS (error, trend, seasonality) for these SSMs.
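For reference, the two benchmark fits can be obtained in R roughly as follows, assuming the observed volatility series is stored in `x` and that the forecast package is available for ets(); the orders follow Table 4 and the call details are illustrative.

```r
# Sketch only: Gaussian AR(1) and ETS benchmarks used in the comparison below.
library(forecast)                               # assumed available, provides ets()

fit_gauss_ar1 <- arima(x, order = c(1, 0, 0))   # Gaussian-AR(1)
fit_ets       <- ets(x)                         # state-space (ETS) model

e_gauss <- residuals(fit_gauss_ar1)             # one-step in-sample errors
e_ets   <- residuals(fit_ets)
```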
A brief descriptive analysis is given in Table 3, reporting the minimum ($X_{1:n}$), the first quartile ($X_{(1)}$), the median, the mean, the standard deviation (SD), the coefficient of variation (CV%), the third quartile ($X_{(3)}$), and the maximum ($X_{n:n}$).
It follows that the USDT volatility values varied in the range $(0.03407, 2.78590)$; the differences within the pairs ($X_{1:n}$, $X_{(1)}$) and (Median, Mean) were not pronounced, while that within ($X_{(3)}$, $X_{n:n}$) was, due to the exponential increase. The value of CV% indicates high dispersion.
Table 4 shows the estimates for the sMuth–ARMA and Gaussian–ARMA parameters. All estimates were well defined in terms of asymptotic confidence intervals. Figure 4 shows the results of predicting the observed series and indicates that all fits were acceptable. Applying the Box–Pierce test to the resulting ordinary residuals showed that the p-values confirmed the white noise assumption.
Finally, we compared the four previous models in terms of their predictive power. Table 5 shows the scores for MAE, MAPE and MASE. For these criteria, the sMuth–ARMA(1,1) performed better than the other models. In summary, our proposal can be used as a good representation technique for predicting positive time series in studies on classification and clustering of macroeconomic indices that rely heavily on the predictive capacity of time series models.

4. Conclusions

In this paper, we proposed a new ARMA process with the sMuth law as its marginal distribution, called the sMuth–ARMA process. A point worth mentioning is that the novel process has a parameter, $\alpha \in (0, 1]$, in addition to those of the ARMA structure. This parameter works as a proportion that affects the variance of the marginal variables: when it approaches zero, our proposal describes phenomena with large variances, and when it is closer to one, it describes situations with small variances. We discussed some mathematical properties of the new model and a conditional maximum likelihood estimation procedure for its parameters. The performance of the estimates was assessed by means of Monte Carlo simulations. Finally, an application to USDT data was presented to illustrate the usefulness of our proposal. The results showed that the sMuth–ARMA model provides better performance than the ARMA and ETS models.
A potential limitation of our proposal is that the use of certain link functions may complicate the estimation of the sMuth–ARMA parameters, in particular the intercept of the ARMA structure. This may require future users to apply robust or bias-corrected estimation techniques.
As future work, we will introduce a sMuth–ARMA model having martingale difference errors, such as those considered by Zheng et al. [22]. The development of refined statistical inference methods may also be a source of future research.

Author Contributions

Conceptualization, H.B. and A.D.C.N.; methodology, A.D.C.N., M.C.S.L. and H.B.; software, A.D.C.N. and M.C.S.L.; validation, A.D.C.N. and M.C.S.L.; formal analysis, A.D.C.N. and M.C.S.L.; data curation, A.D.C.N., M.C.S.L. and H.B.; writing—original draft preparation, A.D.C.N. and M.C.S.L.; writing—review and editing, A.D.C.N., H.B. and N.Q.; visualization, A.D.C.N., H.B. and N.Q.; supervision, A.D.C.N. and H.B.; project administration, A.D.C.N. and H.B.; funding acquisition, N.Q. All authors have read and agreed to the published version of the manuscript.

Funding

This research was funded by the Princess Nourah bint Abdulrahman University Researchers Supporting Project number (PNURSP2023R376), Princess Nourah bint Abdulrahman University, Riyadh, Saudi Arabia.

Data Availability Statement

The data is available from the authors upon reasonable request.

Acknowledgments

The authors gratefully acknowledge Princess Nourah bint Abdulrahman University Researchers Supporting Project number (PNURSP2023R376), Princess Nourah bint Abdulrahman University, Riyadh, Saudi Arabia for the financial support for this project. The first two authors are grateful to CNPq for their support.

Conflicts of Interest

The authors declare that they have no conflict of interest.

Appendix A. Derivation of Score Function Components

The conditional log-likelihood function is
\ell(\pi) = \sum_{t=m+1}^{n} \ell_t(\pi; x_t).
Thus, the associated score function is
\frac{\partial \ell(\pi)}{\partial \pi} = \sum_{t=m+1}^{n} \frac{\partial \ell_t(\pi; x_t)}{\partial \mu_t}\, \frac{d \mu_t}{d \eta_t}\, \frac{\partial \eta_t}{\partial \pi}.
From generalized linear model theory, $d\mu_t/d\eta_t = 1/g'(\mu_t)$. Simple algebraic operations provide
\frac{\partial \ell_t(\pi; x_t)}{\partial \mu_t} = -\frac{1}{\mu_t} - \frac{\alpha x_t}{\mu_t^2}\, \frac{e^{\alpha x_t/\mu_t}}{e^{\alpha x_t/\mu_t} - \alpha} + \frac{x_t}{\mu_t^2} \left( e^{\alpha x_t/\mu_t} - \alpha \right) := A_t.
Further, one can notice that
\frac{\partial \eta_t}{\partial \delta} = 1 + \sum_{k=1}^{q} \theta_k \frac{\partial r_{t-k}}{\partial \delta}, \qquad \frac{\partial \eta_t}{\partial \beta} = z_t - \sum_{i=1}^{p} \phi_i z_{t-i} + \sum_{k=1}^{q} \theta_k \frac{\partial r_{t-k}}{\partial \beta}, \qquad \frac{\partial \eta_t}{\partial \phi_i} = g(x_{t-i}) - z_{t-i}^\top \beta + \sum_{k=1}^{q} \theta_k \frac{\partial r_{t-k}}{\partial \phi_i}, \quad i = 1, \ldots, p,
and
\frac{\partial \eta_t}{\partial \theta_l} = r_{t-l} + \sum_{k=1}^{q} \theta_k \frac{\partial r_{t-k}}{\partial \theta_l}, \quad l = 1, \ldots, q.
Thus, we need explicit expressions for the derivatives involving the error $r_t$. By the same reasoning discussed by Rocha and Cribari-Neto [9], we consider
r_t = g(x_t) - g(\mu_t).
In what follows, $\lambda$ stands for $\delta$, $\beta_l$, $\phi_i$ or $\theta_k$. So, we have
\frac{\partial r_t}{\partial \lambda} = \frac{\partial r_t}{\partial \eta_t}\, \frac{\partial \eta_t}{\partial \lambda} = -\frac{\partial g(\mu_t)}{\partial \eta_t}\, \frac{\partial \eta_t}{\partial \lambda} = -\frac{g'(\mu_t)}{g'(\eta_t)}\, \frac{\partial \eta_t}{\partial \lambda}.
The following recursions for the derivatives of $\eta_t$ are then obtained:
\frac{\partial \eta_t}{\partial \delta} = 1 - \sum_{k=1}^{q} \theta_k\, \frac{g'(\mu_t)}{g'(\eta_{t-k})}\, \frac{\partial \eta_{t-k}}{\partial \delta}, \qquad \frac{\partial \eta_t}{\partial \beta} = z_t - \sum_{i=1}^{p} \phi_i z_{t-i} - \sum_{k=1}^{q} \theta_k\, \frac{g'(\mu_t)}{g'(\eta_{t-k})}\, \frac{\partial \eta_{t-k}}{\partial \beta}, \qquad \frac{\partial \eta_t}{\partial \phi_i} = g(x_{t-i}) - z_{t-i}^\top \beta - \sum_{k=1}^{q} \theta_k\, \frac{g'(\mu_t)}{g'(\eta_{t-k})}\, \frac{\partial \eta_{t-k}}{\partial \phi_i},
and
\frac{\partial \eta_t}{\partial \theta_l} = r_{t-l} - \sum_{k=1}^{q} \theta_k\, \frac{g'(\mu_t)}{g'(\eta_{t-k})}\, \frac{\partial \eta_{t-k}}{\partial \theta_l}.
Then, we have
U_\delta := \frac{\partial \ell}{\partial \delta} = \sum_{t=m+1}^{n} A_t\, \frac{1}{g'(\mu_t)} \left[ 1 - \sum_{k=1}^{q} \theta_k\, \frac{g'(\mu_t)}{g'(\eta_{t-k})}\, \frac{\partial \eta_{t-k}}{\partial \delta} \right].
Let $s$ be an $(n-m)$-dimensional vector whose l-th element is given by
s_l = \frac{\partial \eta_{l+m}}{\partial \delta} = 1 - \sum_{k=1}^{q} \theta_k\, \frac{g'(\mu_{l+m})}{g'(\eta_{l+m-k})}\, \frac{\partial \eta_{l+m-k}}{\partial \delta}.
Thus, we can write
U_\delta = s^\top T a,
where $T := \operatorname{diag}\{1/g'(\mu_{m+1}), \ldots, 1/g'(\mu_n)\}$ and $a = [A_{m+1}, \ldots, A_n]^\top$.
Further, we have
U_\beta := \frac{\partial \ell}{\partial \beta} = \sum_{t=m+1}^{n} \frac{\partial \ell_t(\mu_t, \alpha)}{\partial \mu_t}\, \frac{\partial \mu_t}{\partial \eta_t}\, \frac{\partial \eta_t}{\partial \beta} = \sum_{t=m+1}^{n} A_t\, \frac{1}{g'(\mu_t)} \left[ z_t - \sum_{i=1}^{p} \phi_i z_{t-i} - \sum_{k=1}^{q} \theta_k\, \frac{g'(\mu_t)}{g'(\eta_{t-k})}\, \frac{\partial \eta_{t-k}}{\partial \beta} \right].
Let $M$ be an $(n-m) \times k$ matrix whose t-th row is given by
m_t := z_{t+m} - \sum_{i=1}^{p} \phi_i z_{t+m-i} - \sum_{k=1}^{q} \theta_k\, \frac{g'(\mu_{t+m})}{g'(\eta_{t+m-k})}\, \frac{\partial \eta_{t+m-k}}{\partial \beta}.
Thus, we can write
U_\beta = M^\top T a.
Now, differentiating with respect to $\phi_i$, for $i = 1, \ldots, p$, we have
U_{\phi_i} := \frac{\partial \ell}{\partial \phi_i} = \sum_{t=m+1}^{n} A_t\, \frac{1}{g'(\mu_t)} \left[ g(x_{t-i}) - z_{t-i}^\top \beta - \sum_{k=1}^{q} \theta_k\, \frac{g'(\mu_t)}{g'(\eta_{t-k})}\, \frac{\partial \eta_{t-k}}{\partial \phi_i} \right].
Let $P$ be an $(n-m) \times p$ matrix whose $(t, i)$-th entry is given by
p_{t,i} := g(x_{t+m-i}) - z_{t+m-i}^\top \beta - \sum_{k=1}^{q} \theta_k\, \frac{g'(\mu_{t+m})}{g'(\eta_{t+m-k})}\, \frac{\partial \eta_{t+m-k}}{\partial \phi_i}.
So, we have
U_\phi = P^\top T a.
Analogously, it follows that
U_{\theta_l} := \frac{\partial \ell}{\partial \theta_l} = \sum_{t=m+1}^{n} A_t\, \frac{1}{g'(\mu_t)} \left[ r_{t-l} - \sum_{k=1}^{q} \theta_k\, \frac{g'(\mu_t)}{g'(\eta_{t-k})}\, \frac{\partial \eta_{t-k}}{\partial \theta_l} \right].
Now, let $R$ be an $(n-m) \times q$ matrix whose $(t, l)$-th entry is given by
r_{t,l} := r_{t+m-l} - \sum_{k=1}^{q} \theta_k\, \frac{g'(\mu_{t+m})}{g'(\eta_{t+m-k})}\, \frac{\partial \eta_{t+m-k}}{\partial \theta_l}.
Therefore, we obtain
U_\theta = R^\top T a.
Finally,
U_\alpha := \frac{\partial \ell}{\partial \alpha} = \sum_{t=m+1}^{n} \left[ \frac{(x_t/\mu_t)\, e^{\alpha x_t/\mu_t} - 1}{e^{\alpha x_t/\mu_t} - \alpha} + \frac{x_t}{\mu_t} - \frac{1}{\alpha^2} + \frac{1}{\alpha^2}\, e^{\alpha x_t/\mu_t} - \frac{x_t}{\alpha \mu_t}\, e^{\alpha x_t/\mu_t} \right].
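As a quick numerical sanity check of the expression for U_α, one can compare it with a finite-difference derivative of the conditional log-likelihood at fixed μ_t (which does not depend on α), as in the following R sketch with illustrative values.

```r
# Sketch only: analytic U_alpha versus a central finite difference, mu_t held fixed.
loglik_alpha <- function(alpha, x, mu) {
  z <- alpha * x / mu
  sum(-log(mu) + log(exp(z) - alpha) + z + 1 / alpha - exp(z) / alpha)
}
score_alpha <- function(alpha, x, mu) {
  z <- alpha * x / mu
  sum(((x / mu) * exp(z) - 1) / (exp(z) - alpha) + x / mu
      - 1 / alpha^2 + exp(z) / alpha^2 - (x / (alpha * mu)) * exp(z))
}

set.seed(1)
x <- rexp(50); mu <- rep(1, 50); a <- 0.4       # illustrative data and parameter
c(analytic  = score_alpha(a, x, mu),
  numerical = (loglik_alpha(a + 1e-6, x, mu) - loglik_alpha(a - 1e-6, x, mu)) / 2e-6)
```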

Appendix B. Derivation of FIM

Proposition A1. 
The FIM components are given by
K α α = E 2 α 2 | F t 1 = t = m + 1 n ( 1 α ) C t ( α C t ) 2 + 2 α 3 + 2 α 2 2 α 3 1 α C t , K α δ = E 2 α δ | F t 1 = t = m + 1 n E ( B t ) 1 g ( μ t ) [ k = 1 q θ k g ( μ t k ) g ( η t k ) α 2 η t k α δ k = 1 q θ k g ( μ t k ) g ( η t k ) 2 η t k α δ ] , K α β = E 2 α β | F t 1 = t = m + 1 n E ( B t ) 1 g ( μ t ) [ k = 1 q θ k g ( μ t k ) g ( η t k ) α 2 η t k α β k = 1 q g ( μ t k ) g ( η t k ) 2 η t k α β ] , K α ϕ i = E 2 α ϕ | F t 1 = t = m + 1 n E ( B t ) 1 g ( μ t ) [ k = 1 q θ k g ( μ t k ) g ( η t k ) α 2 η t k α ϕ i k = 1 q g ( μ t k ) g ( η t k ) 2 η t k α ϕ i ] , K α θ l = E 2 α θ l | F t 1 = t = m + 1 n E ( B t ) 1 g ( μ t ) [ g ( μ t ) g ( η t ) η t α k = 1 q θ k g ( μ t k ) g ( η t k ) α 2 η t k α θ l k = 1 q g ( μ t k ) g ( η t k ) 2 η t k α θ l ] K δ δ = E 2 δ 2 | F t 1 = t = m + 1 n E ( B t ) 1 g ( μ t ) 2 1 k = 1 q θ k g ( μ t ) g ( η t k ) η t k δ 2 , K δ β = E 2 δ β | F t 1 = t = m + 1 n k = 1 q E ( B t ) θ k g ( μ t ) g ( η t k ) g ( μ t ) 2 η t k δ β , K δ ϕ i = E 2 δ ϕ i | F t 1 = t = m + 1 n k = 1 q E ( B t ) θ k g ( μ t ) g ( η t k ) g ( μ t ) 2 η t k δ ϕ i ,
K δ θ l = E 2 δ θ l | F t 1 = t = m + 1 n E ( B t ) 1 g ( μ t ) g ( μ t l ) g ( η t l ) η t l θ l + t = m + 1 n k = 1 q E ( B t ) θ k θ l g ( μ t ) g ( η t k ) g ( μ t ) η t k δ B t θ k g ( μ t ) g ( η t k ) g ( μ t ) 2 η t k δ θ l , K β l β h = E 2 β l β h | F t 1 = t = m + 1 n E ( B t ) 1 g ( μ t ) 2 z t , l i = 1 p ϕ i z t i , l k = 1 q θ k g ( μ t k ) g ( η t k ) η t k β l × z t , h i = 1 p ϕ i z t i , h k = 1 q θ k g ( μ t k ) g ( η t k ) η t k β h , K β θ l = E 2 θ l β | F t 1 = t = m + 1 n E ( B t ) 1 g ( μ t ) g ( μ t l ) g ( η t l ) η t l β k = 1 q θ k g ( μ t ) g ( η t k ) 2 η t k θ l β , K θ l ϕ i = E 2 θ l ϕ i | F t 1 = t = m + 1 n E ( B t ) 1 g ( μ t ) g ( μ t l ) g ( η t l ) η t l ϕ i k = 1 q θ k g ( μ t ) g ( η t k ) 2 η t k θ l ϕ i , K θ l θ i = E 2 θ l θ i | F t 1 = t = m + 1 n E ( B t ) 1 g ( μ t ) [ g ( μ t l ) g ( η t l ) η t l θ i k = 1 q θ k g ( μ t k ) g ( η t k ) θ i 2 η t k θ i θ l k = 1 q θ k g ( μ t k ) g ( η t k ) 2 η t k θ l θ i ] , K ϕ i ϕ j = E 2 ϕ i ϕ j | F t 1 = t = m + 1 n E ( B t ) 1 g ( μ t ) [ k = 1 q θ k g ( μ t k ) g ( η t k ) ϕ j 2 η t k ϕ i ϕ j k = 1 q θ k g ( μ t k ) g ( η t k ) 2 η t k ϕ i ϕ j ] ,
where
E ( B t ) = 1 μ t 2 1 + α C ( α ) D ( α ) [ α 2 + 2 C ( α ) 2 α ] + 2 α ( 2 + α ) C ( α ) ,
C ( α ) = α 2 exp 1 α Γ 3 , 1 α Γ 2 , 1 α
and
D ( α ) = 1 α exp 1 α E i 1 1 α α e .
Here,  E i ( x )  is the exponential integral  E i  and e is the Euler number.

References

  1. Bollerslev, T. Generalized autoregressive conditional heteroskedasticity. J. Econom. 1986, 31, 307–327. [Google Scholar] [CrossRef]
  2. Tsay, R. Analysis of Financial Time Series; Wiley Series in Probability and Statistics; Wiley: Hoboken, NJ, USA, 2005. [Google Scholar]
  3. Machiwal, D.; Jha, M. Hydrologic Time Series Analysis: Theory and Practice; Springer Netherlands: Dordrecht, The Netherlands, 2011. [Google Scholar]
  4. Almeida-Junior, P.M.; Nascimento, A.D.C. $\mathcal{G}_I^0$-ARMA process for speckled data. J. Stat. Comput. Simul. 2021, 91, 3125–3153. [Google Scholar] [CrossRef]
  5. Brockwell, P.J.; Davis, R.A. Stationary time series. In Time Series: Theory and Methods; Springer: New York, NY, USA, 1991; pp. 1–41. [Google Scholar]
  6. Chuang, M.D.; Yu, G.H. Order series method for forecasting non-Gaussian time series. J. Forecast. 2007, 26, 239–250. [Google Scholar] [CrossRef]
  7. Tiku, M.L.; Wong, W.K.; Vaughan, D.C.; Bian, G. Time series models in non-normal situations: Symmetric Innovations. J. Time Ser. Anal. 2000, 21, 571–596. [Google Scholar] [CrossRef]
  8. Benjamin, M.A.; Rigby, R.A.; Stasinopoulos, D.M. Generalized autoregressive moving average models. J. Am. Stat. Assoc. 2003, 98, 214–223. [Google Scholar] [CrossRef]
  9. Rocha, A.V.; Cribari-Neto, F. Beta autoregressive moving average models. Test 2009, 18, 529–545. [Google Scholar] [CrossRef]
  10. Bayer, F.M.; Bayer, D.M.; Pumi, G. Kumaraswamy autoregressive moving average models for double bounded environmental data. J. Hydrol. 2017, 555, 385–396. [Google Scholar] [CrossRef]
  11. Weiß, C.H.; Kim, H.Y. Binomial AR(1) processes: Moments, cumulants, and estimation. Statistics 2013, 47, 494–510. [Google Scholar] [CrossRef]
  12. Weiß, C.H. Integer-valued autoregressive models for counts showing underdispersion. J. Appl. Stat. 2013, 40, 1931–1948. [Google Scholar] [CrossRef]
  13. Lewis, P.; McKenzie, E.; Hugus, D.K. Gamma processes. Commun. Stat. Stoch. Model. 1989, 5, 1–30. [Google Scholar] [CrossRef]
  14. Jose, K.K.; Thomas, M.M. Generalized Laplacian distributions and autoregressive processes. Commun. Stat. Theory Methods 2011, 40, 4263–4277. [Google Scholar] [CrossRef]
  15. Bondon, P. Estimation of autoregressive models with epsilon-skew-normal innovations. J. Multivar. Anal. 2009, 100, 1761–1776. [Google Scholar] [CrossRef]
  16. Jodrá, P.; Jiménez-Gamero, M.D.; Alba-Fernández, M.V. On the Muth distribution. Math. Model. Anal. 2015, 20, 291–310. [Google Scholar] [CrossRef]
  17. Sardon, J.P. Prévisions de mortalité et vieillissement démographique. Gerontol. Soc. 1996, 19, 117–136. [Google Scholar] [CrossRef]
  18. Muth, J.E. The Theory and Applications of Reliability; Academic Press, Inc.: New York, NY, USA, 1977; Volume 2, pp. 401–435. [Google Scholar]
  19. Osterrieder, J.; Lorenz, J. A statistical risk assessment of bitcoin and its extreme tail behavior. Ann. Financ. Econ. 2017, 12, 1750003. [Google Scholar] [CrossRef]
  20. Anderson, S.; Branch, T.; Cooper, A.; Dulvy, N. Black-swan events in animal populations. Proc. Natl. Acad. Sci. USA 2017, 114, 3252–3257. [Google Scholar] [CrossRef]
  21. Biçer, C.; Bakouch, H.S.; Biçer, H.D. Inference on parameters of a geometric process with scaled Muth distribution. Fluct. Noise Lett. 2021, 20, 2150006. [Google Scholar] [CrossRef]
  22. Zheng, T.; Xiao, H.; Chen, R. Generalized ARMA models with martingale difference errors. J. Econom. 2015, 189, 492–506. [Google Scholar] [CrossRef]
  23. Scher, V.T.; Cribari-Neto, F.; Pumi, G.; Bayer, F.M. Goodness-of-fit tests for β-ARMA hydrological time series modeling. Environmetrics 2020, 31, e2607. [Google Scholar] [CrossRef]
  24. Woodard, D.B.; Matteson, D.S.; Henderson, S.G. Stationarity of generalized autoregressive moving average models. Electron. J. Stat. 2011, 5, 800–828. [Google Scholar] [CrossRef]
  25. Brockwell, P.; Davis, R. Time Series: Theory and Methods; Springer Series in Statistics; Springer: New York, NY, USA, 2009. [Google Scholar]
  26. Alizadeh, S.; Brandt, M.W.; Diebold, F.X. Range-based estimation of stochastic volatility models. J. Financ. 2002, 57, 1047–1091. [Google Scholar] [CrossRef]
  27. Chiang, M.H.; Wang, L.M. Volatility contagion: A range-based volatility approach. J. Econom. 2011, 165, 175–189. [Google Scholar] [CrossRef]
  28. Chen, C.; Twycross, J.; Garibaldi, J.M. A new accuracy measure based on bounded relative error for time series forecasting. PLoS ONE 2017, 12, e0174202. [Google Scholar] [CrossRef] [PubMed]
  29. Hyndman, R.; Koehler, A.; Ord, J.; Snyder, R. Forecasting with Exponential Smoothing: The State Space Approach; Springer Series in Statistics; Springer: Berlin/Heidelberg, Germany, 2008. [Google Scholar]
Figure 1. The sMuth density curves for $\theta = (\alpha, \mu)$. (a) $\mu = 1$ and $\alpha = 0.1, 0.3, 0.5, 0.8$, and $0.99$. (b) $\alpha = 0.9$ and $\mu = 0.4, 0.5, 1, 1.5$, and $2$.
Figure 2. Generated series from the sMuth–ARMA model and their (ACF, PACF) pairs. (a) For sM-AR(1): $(\delta, \phi, \lambda, g(x), u) = (1, 0.8, 1, \log W(x), 0.5)$. (b) ACF–PACF. (c) For sM-MA(1): $(\delta, \theta, \lambda, g(x), u) = (1, 0.8, 1, \log W(x), 0.5)$. (d) ACF–PACF. (e) For sM-ARMA(1,1): $(\delta, \phi, \theta, \lambda, g(x), u) = (1, 0.45, 0.45, 1, \log(x), 0.5)$. (f) ACF–PACF.
Figure 3. USDT series and its sample ACF and PACF. (a) Real series. (b) (ACF, PACF) of the USDT real series.
Figure 4. Predicted series and histogram. (a) Observed and fitted sMuth–ARMA(1,1) series. (b) Observed and fitted sMuth–AR(1) series. (c) Observed and fitted ETS series. (d) The fitted sMuth and Gaussian laws.
Table 1. Simulation results for the sMuth–ARMA(1,1) process.
$(\alpha, \phi_1, \phi_2, a)$: link | N | mean $\hat{\alpha}$ | mean $\hat{\phi}_1$ | mean $\hat{\phi}_2$ | mean $\hat{a}$ | MSE($\hat{\alpha}$) | MSE($\hat{\phi}_1$) | MSE($\hat{\phi}_2$) | MSE($\hat{a}$)
(0.5, 0.5, 0.5, 1): log(x) | 49 | 0.559 | 0.493 | 0.508 | 0.992 | 0.068 | 0.074 | 0.074 | 0.218
 | 121 | 0.525 | 0.500 | 0.499 | 0.994 | 0.062 | 0.066 | 0.067 | 0.197
 | 400 | 0.507 | 0.499 | 0.502 | 0.996 | 0.063 | 0.063 | 0.064 | 0.190
(0.5, 0.5, 0.5, 1): √x | 49 | 0.562 | 0.485 | 0.516 | 1.012 | 0.065 | 0.087 | 0.071 | 0.242
 | 121 | 0.523 | 0.502 | 0.506 | 0.999 | 0.065 | 0.069 | 0.065 | 0.203
 | 400 | 0.507 | 0.499 | 0.502 | 1.001 | 0.063 | 0.065 | 0.062 | 0.190
(0.5, 0.5, 0.5, 1): log W(x) | 49 | 0.556 | 0.510 | 0.497 | 0.977 | 0.0715 | 0.064 | 0.071 | 0.183
 | 121 | 0.521 | 0.502 | 0.498 | 0.995 | 0.065 | 0.064 | 0.066 | 0.189
 | 400 | 0.508 | 0.499 | 0.501 | 0.999 | 0.061 | 0.063 | 0.064 | 0.189
(0.1, 0.5, 0.5, 1): log(x) | 49 | 0.197 | 0.470 | 0.523 | 0.980 | 0.226 | 0.123 | 0.116 | 0.372
 | 121 | 0.138 | 0.492 | 0.506 | 0.984 | 0.257 | 0.111 | 0.107 | 0.330
 | 400 | 0.113 | 0.496 | 0.503 | 0.997 | 0.274 | 0.103 | 0.104 | 0.331
(0.1, 0.5, 0.5, 1): √x | 49 | 0.187 | 0.476 | 0.516 | 1.015 | 0.235 | 0.147 | 0.122 | 0.400
 | 121 | 0.133 | 0.492 | 0.507 | 1.001 | 0.265 | 0.117 | 0.105 | 0.342
 | 400 | 0.109 | 0.499 | 0.501 | 1.001 | 0.278 | 0.105 | 0.103 | 0.332
(0.1, 0.5, 0.5, 1): log W(x) | 49 | 0.193 | 0.502 | 0.494 | 0.989 | 0.229 | 0.110 | 0.111 | 0.338
 | 121 | 0.135 | 0.494 | 0.500 | 1.008 | 0.262 | 0.101 | 0.107 | 0.346
 | 400 | 0.111 | 0.498 | 0.499 | 1.003 | 0.275 | 0.104 | 0.103 | 0.334
Table 2. Simulation results for the sMuth–AR(1) process.
$(\alpha, \phi, a)$: link | N | mean $\hat{\alpha}$ | mean $\hat{\phi}$ | mean $\hat{a}$ | MSE($\hat{\alpha}$) | MSE($\hat{\phi}$) | MSE($\hat{a}$)
(0.5, 0.5, 1): log(x) | 49 | 0.532 | 0.481 | 1.011 | 0.091 | 0.098 | 0.196
 | 121 | 0.520 | 0.495 | 1.003 | 0.084 | 0.087 | 0.177
 | 400 | 0.504 | 0.498 | 1.001 | 0.082 | 0.084 | 0.167
(0.5, 0.5, 1): √x | 49 | 0.538 | 0.4777 | 1.028 | 0.088 | 0.106 | 0.219
 | 121 | 0.514 | 0.495 | 1.004 | 0.085 | 0.089 | 0.182
 | 400 | 0.506 | 0.498 | 1.002 | 0.084 | 0.085 | 0.171
(0.5, 0.5, 1): log W(x) | 49 | 0.539 | 0.501 | 0.996 | 0.088 | 0.086 | 0.173
 | 121 | 0.509 | 0.499 | 1.000 | 0.088 | 0.084 | 0.171
 | 400 | 0.503 | 0.500 | 0.999 | 0.084 | 0.083 | 0.168
(0.1, 0.5, 1): log(x) | 49 | 0.168 | 0.472 | 1.007 | 0.285 | 0.150 | 0.384
 | 121 | 0.130 | 0.489 | 1.001 | 0.304 | 0.141 | 0.366
 | 400 | 0.107 | 0.498 | 1.001 | 0.318 | 0.140 | 0.358
(0.1, 0.5, 1): √x | 49 | 0.172 | 0.462 | 1.049 | 0.284 | 0.172 | 0.459
 | 121 | 0.128 | 0.481 | 1.023 | 0.306 | 0.143 | 0.392
 | 400 | 0.107 | 0.494 | 1.007 | 0.319 | 0.138 | 0.363
(0.1, 0.5, 1): log W(x) | 49 | 0.171 | 0.494 | 1.007 | 0.281 | 0.137 | 0.369
 | 121 | 0.125 | 0.497 | 1.003 | 0.308 | 0.139 | 0.362
 | 400 | 0.110 | 0.497 | 1.006 | 0.318 | 0.138 | 0.360
Table 3. Summary of descriptive statistics.
$X_{1:n}$ | $X_{(1)}$ | Median | Mean | SD | CV% | $X_{(3)}$ | $X_{n:n}$
0.03407 | 0.07992 | 0.11732 | 0.15441 | 0.1877139 | 121.5685% | 0.16666 | 2.78590
Table 4. The estimates and performance of models used in the current application.
Models | $\hat{\alpha}$ | $\hat{\phi}$ | $\hat{\theta}$ | $\hat{\delta}$ / $\hat{\sigma}^2$
sMuth-AR(1) | 0.2740048 | 0.8126926 | - | -0.2844197
 | (0.03448416) | (0.06260973) | - | (0.13345582)
sMuth–ARMA(1,1) | 0.30364612 | 0.93275200 | -0.25855030 | -0.06818648
 | (0.03622131) | (0.05476766) | (0.08794887) | (0.11136618)
Gaussian-AR(1) | 0.4138 | 0.1541 | - | 0.02926
 | (0.0475) | (0.0152) | - | (0.002130622)
Table 5. Measures of prediction for the time series models used.
Models | MAE | MAPE | MASE
sMuth–ARMA(1,1) | 0.06575188 | 0.51026805 | 0.93168377
sMuth-AR(1) | 0.06625323 | 0.51411739 | 0.93878780
ETS | 0.06938751 | 0.56646631 | 0.98319957
Gaussian-AR(1) | 0.06983971 | 0.60868836 | 0.98960701
Disclaimer/Publisher’s Note: The statements, opinions and data contained in all publications are solely those of the individual author(s) and contributor(s) and not of MDPI and/or the editor(s). MDPI and/or the editor(s) disclaim responsibility for any injury to people or property resulting from any ideas, methods, instructions or products referred to in the content.
