Article

Adaptive Online Learning for the Autoregressive Integrated Moving Average Models

1 Faculty of Electrical Engineering and Computer Science, Technische Universität Berlin, Ernst-Reuter-Platz 7, 10587 Berlin, Germany
2 GT-ARC Gemeinnützige GmbH, Ernst-Reuter-Platz 7, 10587 Berlin, Germany
* Author to whom correspondence should be addressed.
Mathematics 2021, 9(13), 1523; https://doi.org/10.3390/math9131523
Submission received: 19 April 2021 / Revised: 19 May 2021 / Accepted: 24 June 2021 / Published: 29 June 2021
(This article belongs to the Special Issue Computational Optimizations for Machine Learning)

Abstract: This paper addresses the problem of predicting time series data using the autoregressive integrated moving average (ARIMA) model in an online manner. Existing algorithms require model selection, which is time consuming and unsuitable for the setting of online learning. Using adaptive online learning techniques, we develop algorithms for fitting ARIMA models without hyperparameters. The regret analysis and experiments on both synthetic and real-world datasets show that the performance of the proposed algorithms can be guaranteed in both theory and practice.

1. Introduction

The autoregressive integrated moving average (ARIMA) model is an important tool for time series analysis [1] and has been successfully applied to a wide range of domains, including the forecasting of household electric consumption [2], scheduling in smart grids [3], finance [4], and environmental protection [5]. It specifies that the values of a time series depend linearly on their previous values and error terms. In recent years, online learning (OL) methods have been applied to estimate the univariate [6,7] and multivariate [8,9] ARIMA models for their efficiency and scalability. These methods are based on the fact that any ARIMA model can be approximated by a finite-dimensional autoregressive (AR) model, which can be fitted incrementally using online convex optimization algorithms. However, to guarantee accurate predictions, these methods require a proper configuration of hyperparameters, such as the diameter of the decision set, the learning rate, the order of differencing, and the lag of the AR model. Theoretically, these hyperparameters need to be set according to prior knowledge about the data generation, which is impossible to obtain. In practice, the hyperparameters are usually tuned to optimize the goodness of fit on unseen data, which requires numerical simulation (e.g., cross-validation) on a previously collected dataset. The numerical simulation is notoriously expensive, since it requires multiple training runs for each candidate hyperparameter configuration. Furthermore, a previously collected dataset containing ground truth is needed to validate the fitted model, which is unsuited to the online setting. Worse still, the expensive tuning process needs to be repeated regularly if the statistical properties of the time series change over time in an unforeseen way.
Given a new problem of predicting time series values, it appears that tuning the hyperparameters of the online algorithms can negate the benefits of the online setting. This paper addresses this problem within the online learning framework by proposing new parameter-free algorithms for learning ARIMA models whose performance can still be guaranteed in both theory and practice. A naive attempt would be to directly apply parameter-free online convex optimization (PF-OCO) algorithms to the AR approximation. However, the theoretical performance of both the AR approximation and the parameter-free algorithms relies on bounded gradient vectors of the loss function, which is an unreasonable requirement for the widely used squared error on an unbounded domain.
The key contribution of this paper is the design of online learning algorithms for ARIMA models that avoid regular and expensive hyperparameter tuning without damaging the power of the models. Our algorithms update the model incrementally, with a per-iteration computational complexity that is linear in the size of the model parameters and the number of candidate models. To obtain a solid theoretical foundation, we first show that, for any locally Lipschitz-continuous loss function, an ARIMA model with a fixed order of differencing can be approximated by an AR model of the same order of differencing with a large enough lag. Based on this, new algorithms are proposed for learning the AR model adaptively without requiring any prior knowledge about the model parameters. For Lipschitz-continuous loss functions, we apply a new algorithm based on the adaptive follow the regularized leader (FTRL) framework [10] and show that our algorithm achieves a sublinear regret bound depending on the data sequence and the Lipschitz constant. A special treatment of the commonly used squared error is required due to its lack of Lipschitz continuity. To obtain a data-dependent regret bound, we combine a polynomial regularizer [11] with the adaptive FTRL framework. Finally, to find the proper order and lag of the AR model in an online manner, multiple AR models are maintained simultaneously, and an adaptive hedge algorithm is applied to aggregate their predictions. In previous attempts [12,13] to solve this online model selection (OMS) problem, the exponentiated gradient (EG) algorithm was applied directly to aggregate the predictions, which not only requires tuning the learning rate, but also yields a regret bound depending on the loss incurred by the worst model. Our adaptive hedge algorithm is parameter-free and guarantees a regret bound depending on the time series sequence. Table 1 provides a comparison of the online learning algorithms applied to the learning of ARIMA models. In addition to the theoretical analysis, we also demonstrate the performance of the proposed algorithms using both synthetic and real-world datasets.
The rest of the paper is organized as follows. Section 2 reviews the existing work on the subject. The notation, learning model, and formal description of the problem are introduced in Section 3. Next, we present and analyze our algorithms in Section 4. Section 5 demonstrates the empirical performance of the proposed methods. Finally, we conclude our work with some future research directions in Section 6.
Algorithm 1 ARIMA-AdaFTRL.
Input: $L_1 > 0$
Initialize $\theta_{i,1}$ arbitrarily, $\eta_{i,1} = 0$, $G_{i,0} = 0$ for $i = 1, \ldots, m$
for $t = 1$ to $T$ do
  for $i = 1$ to $m$ do
    $G_{i,t} = \max\{G_{i,t-1}, \|\nabla^d X_{t-i}\|_2\}$
    $\eta_{i,t} = \sqrt{\|\theta_{i,1}\|_F + \sum_{s=1}^{t-1} \|g_{i,s}\|_F^2 + (L_t G_{i,t})^2}$
    if $\eta_{i,t} \neq 0$ then
      $\gamma_{i,t} = \frac{\theta_{i,t}}{\eta_{i,t}}$
    else
      $\gamma_{i,t} = 0$
    end if
  end for
  Play $\tilde{X}_t(\gamma_t)$
  Observe $X_t$ and $g_t \in \partial l_t(\tilde{X}_t(\gamma_t))$
  $L_{t+1} = \max\{L_t, \|g_t\|_2\}$
  for $i = 1$ to $m$ do
    $g_{i,t} = g_t (\nabla^d X_{t-i})^\top$
    $\theta_{i,t+1} = \theta_{i,t} - g_{i,t}$
  end for
end for
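To make the updates concrete, the following is a minimal NumPy sketch of Algorithm 1 for vector-valued series with $d = 1$ and the (Lipschitz) Euclidean distance loss. The zero initialization of $\theta$ and the data layout are assumptions of the sketch, not part of the algorithm's specification.

```python
import numpy as np

def arima_adaftrl(X, m=8, L1=1.0):
    """Sketch of ARIMA-AdaFTRL (Algorithm 1) with d = 1 and Euclidean loss.

    X: (T, n) array of observations. Returns one-step predictions of X_{t+1}.
    """
    T, n = X.shape
    dX = np.diff(X, axis=0)              # first-order differences, length T - 1
    theta = np.zeros((m, n, n))          # dual iterates theta_{i,t} (zero init)
    G = np.zeros(m)                      # running max of ||dX_{t-i}||_2
    gsq = np.zeros(m)                    # running sums of ||g_{i,s}||_F^2
    L = L1                               # running estimate of the Lipschitz constant
    preds = []
    for t in range(m, T - 1):            # dX[t] = X[t+1] - X[t] is to be predicted
        gamma = np.zeros((m, n, n))
        for i in range(m):
            G[i] = max(G[i], np.linalg.norm(dX[t - 1 - i]))
            eta = np.sqrt(gsq[i] + (L * G[i]) ** 2)
            if eta > 0:
                gamma[i] = theta[i] / eta
        d_hat = sum(gamma[i] @ dX[t - 1 - i] for i in range(m))
        x_hat = X[t] + d_hat             # integrate the predicted difference
        preds.append(x_hat)
        r = x_hat - X[t + 1]
        nrm = np.linalg.norm(r)
        g = r / nrm if nrm > 0 else r    # subgradient of ||x_hat - X_{t+1}||_2
        L = max(L, np.linalg.norm(g))
        for i in range(m):
            g_i = np.outer(g, dX[t - 1 - i])   # g_{i,t} = g_t (dX_{t-i})^T
            gsq[i] += np.linalg.norm(g_i) ** 2
            theta[i] -= g_i
    return np.array(preds)
```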
Algorithm 2 ARIMA-AdaFTRL-Poly.
Input: $G_0 > 0$
Initialize $\theta_1$ arbitrarily, $G_1 = \max\{G_0, \|\nabla^d X_0\|_2, \ldots, \|\nabla^d X_{-m+1}\|_2\}$
for $t = 1$ to $T$ do
  $\eta_t = \sqrt{\|\theta_1\|_F + \sum_{s=1}^{t-1} \|\nabla^d X_s x_s^\top\|_F^2 + (G_t \|x_t\|_2)^2}$
  $\lambda_t = \sqrt{\sum_{s=1}^{t} \|x_s\|_2^4}$
  if $\|\theta_t\|_F \neq 0$ then
    Select $c \geq 0$ satisfying $\lambda_t c^3 + \eta_t c = \|\theta_t\|_F$
    $\gamma_t = c \frac{\theta_t}{\|\theta_t\|_F}$
  else
    $\gamma_t = 0$
  end if
  Play $\tilde{X}_t(\gamma_t)$
  Observe $X_t$ and $g_t = \gamma_t x_t - \nabla^d X_t$
  $G_{t+1} = \max\{G_t, \|\nabla^d X_t\|_2\}$
  $\theta_{t+1} = \theta_t - g_t x_t^\top$
end for
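The following NumPy sketch mirrors Algorithm 2 for $d = 1$ and the squared error; the zero initialization of $\theta$ and the use of a numerical cubic solver are assumptions of the sketch.

```python
import numpy as np

def arima_adaftrl_poly(X, m=8, G0=1.0):
    """Sketch of ARIMA-AdaFTRL-Poly (Algorithm 2) with d = 1 and squared error.

    X: (T, n) array. theta is the stacked (n x nm) matrix; x stacks the last
    m differences.
    """
    T, n = X.shape
    dX = np.diff(X, axis=0)
    theta = np.zeros((n, n * m))
    G = max([G0] + [np.linalg.norm(dX[i]) for i in range(m)])   # G_1
    dxx_sq = 0.0                         # running sum of ||dX_s x_s^T||_F^2
    x4 = 0.0                             # running sum of ||x_s||_2^4
    preds = []
    for t in range(m, T - 1):
        x = dX[t - m:t][::-1].reshape(-1)    # x_t = (dX_{t-1}, ..., dX_{t-m})
        x4 += np.linalg.norm(x) ** 4
        lam = np.sqrt(x4)
        eta = np.sqrt(dxx_sq + (G * np.linalg.norm(x)) ** 2)
        tn = np.linalg.norm(theta)
        if tn > 0 and lam > 0:
            roots = np.roots([lam, 0.0, eta, -tn])   # lam c^3 + eta c - tn = 0
            c = max(r.real for r in roots if abs(r.imag) < 1e-9)
            gamma = c * theta / tn
        else:
            gamma = np.zeros_like(theta)
        d_hat = gamma @ x
        preds.append(X[t] + d_hat)               # integrate the predicted difference
        g = d_hat - dX[t]                        # g_t = gamma_t x_t - dX_t
        G = max(G, np.linalg.norm(dX[t]))
        dxx_sq += (np.linalg.norm(dX[t]) * np.linalg.norm(x)) ** 2
        theta -= np.outer(g, x)                  # theta_{t+1} = theta_t - g_t x_t^T
    return np.array(preds)
```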
Algorithm 3 ARIMA-AO-Hedge.
Input: predictors $A_1, \ldots, A_K$, $d$
Initialize $\theta_{k,1} = 0$, $\eta_1 = 0$ for $k = 1, \ldots, K$
for $t = 1$ to $T$ do
  Get prediction $\tilde{X}_t^k$ from $A_k$ for $k = 1, \ldots, K$
  Set $Y_t = \sum_{i=0}^{d-1} \nabla^i X_{t-1}$
  Set $h_{k,t} = l(Y_t, \tilde{X}_t^k)$ for $k = 1, \ldots, K$
  if $\eta_t = 0$ then
    Set $w_{k,t} = 1$ for some $k \in \arg\max_{j \in \{1, \ldots, K\}} (\theta_{j,t} - h_{j,t})$ and $w_{j,t} = 0$ otherwise
  else
    Set $w_{k,t} = \frac{\exp(\eta_t^{-1}(\theta_{k,t} - h_{k,t}))}{\sum_{j=1}^{K} \exp(\eta_t^{-1}(\theta_{j,t} - h_{j,t}))}$ for $k = 1, \ldots, K$
  end if
  Predict $\tilde{X}_t = \sum_{k=1}^{K} w_{k,t} \tilde{X}_t^k$
  Observe $X_t$, update $A_k$, and set $z_{k,t} = l(X_t, \tilde{X}_t^k)$ for $k = 1, \ldots, K$
  $\theta_{t+1} = \theta_t - z_t$
  $\eta_{t+1} = \sqrt{\frac{1}{2 \log K} \sum_{s=1}^{t} \|h_s - z_s\|_\infty^2}$
end for
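A compact NumPy sketch of the weight computation and learning-rate update in Algorithm 3 is given below. Precomputing the loss and hint vectors outside the loop is a simplification of the sketch; in the full framework, the slaves are updated inside the loop.

```python
import numpy as np

def ao_hedge_weights(theta, h, eta):
    """One weight step of ARIMA-AO-Hedge.

    theta: (K,) accumulated negative losses; h: (K,) optimistic hints
    h_{k,t} = l(Y_t, X_t^k); eta: current learning-rate parameter.
    """
    s = theta - h
    if eta == 0:
        w = np.zeros_like(s)
        w[np.argmax(s)] = 1.0            # limit of the softmax as eta -> 0
        return w
    e = np.exp((s - s.max()) / eta)      # shifted for numerical stability
    return e / e.sum()

def ao_hedge(losses, hints):
    """Aggregate K experts over T rounds; losses/hints are (T, K) arrays.

    Returns the (T, K) weight sequence w_t used to mix the slave predictions.
    """
    T, K = losses.shape
    theta = np.zeros(K)
    eta, sq = 0.0, 0.0
    W = np.zeros((T, K))
    for t in range(T):
        W[t] = ao_hedge_weights(theta, hints[t], eta)
        theta -= losses[t]                                 # theta_{t+1} = theta_t - z_t
        sq += np.max(np.abs(hints[t] - losses[t])) ** 2    # ||h_t - z_t||_inf^2
        eta = np.sqrt(sq / (2.0 * np.log(K)))
    return W
```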

2. Related Work

An ARIMA model can be fitted using statistical methods such as recursive least squares and maximum likelihood estimation, which are not only based on strong assumptions such as Gaussian distributed noise terms [18], linear dependencies [19], and data generated by a stationary process [20], but also require solving non-convex optimization problems [21]. Although these assumptions can be relaxed by considering non-Gaussian noise [22,23], non-stationary processes [24], or a convex relaxation [21], the pre-trained models still cannot deal with concept drift [7]. Moreover, retraining is time consuming and memory intensive, especially for large-scale datasets. The idea of applying regret minimization techniques to autoregressive moving average (ARMA) prediction was first introduced in [6]. The authors propose online algorithms that incrementally produce predictions close to the values generated by the best ARMA model. This idea was extended to ARIMA(p,q,d) models in [7] by learning the AR(m) model of the higher-order differencing of the time series. Further extensions to multiple time series can be found in [8,9], while the problem of predicting time series with missing data was addressed in [25].
In order to obtain accurate predictions, the lag of the AR model and the order of differencing have to be tuned, which has been well studied in the offline setting. In some textbooks [20,26,27], Akaike's Information Criterion (AIC) and the Bayesian Information Criterion (BIC) are recommended for this task. Both require prior knowledge and strong assumptions about the variance of the noise [20], and they are time and space consuming, since they require numerical simulation such as cross-validation on previously collected datasets. Nevertheless, given a properly selected lag m and order d, online convex optimization techniques such as the online Newton step (ONS) or online gradient descent (OGD) can be applied to fit the model in the regret minimization framework [6,7,8,9]. However, both algorithms introduce additional hyperparameters to control the learning rate and numerical stability.
The idea of selecting hyperparameters for online time series prediction was proposed in [12,13]. Regarding online AR predictors with different lags as experts, the authors aggregate over the predictors by applying a multiplicative weights algorithm for prediction with expert advice. The proposed algorithm is not optimal for time series prediction, since the regret bound of the chosen algorithm depends on the largest loss incurred by the experts [28]. Furthermore, each individual expert still requires that the parameters be taken from a compact decision set, the diameter of which needs to be tuned in practice. A series of recent works on parameter-free online learning has made it possible to achieve sublinear regret without prior information on the decision set. In [14], the unconstrained online learning problem is modeled as a betting game, based on which a parameter-free algorithm is developed. The algorithm was further extended in [15] so that a better regret bound can be achieved for strongly convex loss functions. However, the coin betting algorithm requires that the gradient vectors be normalized, which is unrealistic for unbounded time series and the squared error loss. In [16,17], the authors introduced parameter-free algorithms that do not require normalized gradient vectors. Unfortunately, the regret upper bounds of the proposed algorithms depend on the norm of the gradient vectors, which can be extremely large in our setting.
The main idea of the current work is based on the combination of the adaptive FTRL framework [10] and the idea of handling relative Lipschitz continuous functions [11], which makes it possible to devise an online algorithm with a data-dependent regret upper bound. To aggregate the results, an adaptive optimistic algorithm is proposed, such that the overall regret depends on the data sequence instead of the worst-case loss.

3. Preliminary and Learning Model

Let $X_t$ denote the value of a time series observed at time $t$. We assume that $X_t$ is taken from a finite-dimensional real vector space $\mathcal{X}$ with norm $\|\cdot\|$. We denote by $L(\mathcal{X}, \mathcal{X})$ the vector space of bounded linear operators from $\mathcal{X}$ to $\mathcal{X}$ and by $\|\alpha\|_{op} = \sup_{x \in \mathcal{X}, x \neq 0} \frac{\|\alpha x\|}{\|x\|}$ the corresponding operator norm. An AR(p) model is given by
$$X_t = \sum_{i=1}^{p} \alpha_i X_{t-i} + \epsilon_t,$$
where $\alpha_i \in L(\mathcal{X}, \mathcal{X})$ is a linear operator and $\epsilon_t \in \mathcal{X}$ is an error term. The ARMA(p,q) model extends the AR(p) model by adding a moving average (MA) component as follows:
$$X_t = \sum_{i=1}^{p} \alpha_i X_{t-i} + \sum_{i=1}^{q} \beta_i \epsilon_{t-i} + \epsilon_t,$$
where $\epsilon_t \in \mathcal{X}$ is the error term and $\beta_i \in L(\mathcal{X}, \mathcal{X})$. We define the d-th order differencing of the time series as $\nabla^d X_t = \nabla^{d-1} X_t - \nabla^{d-1} X_{t-1}$ for $d \geq 1$ and $\nabla^0 X_t = X_t$. The ARIMA(p,q,d) model assumes that the d-th order differencing of the time series follows an ARMA(p,q) model. In this section, this general setting suffices for introducing the learning model. In the following sections, we fix the basis of $\mathcal{X}$ to obtain implementable algorithms, for which different kinds of norms and inner products for vectors and matrices are needed. We provide a table of the required notation in Appendix C.
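As a quick concrete check of the differencing notation, higher-order differences can be computed by repeatedly taking first-order differences; a minimal NumPy equivalent (our own illustration, with a randomly generated integrated series) is:

```python
import numpy as np

def difference(X, d):
    """d-th order differencing: grad^d X_t = grad^{d-1} X_t - grad^{d-1} X_{t-1}."""
    for _ in range(d):
        X = X[1:] - X[:-1]          # one order of differencing, drops one value
    return X                        # equivalent to np.diff(X, n=d, axis=0)

X = np.cumsum(np.random.randn(100, 3), axis=0)   # an integrated (trending) series
assert np.allclose(difference(X, 2), np.diff(X, n=2, axis=0))
```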
In this paper, we consider the setting of online learning, which can be described as an iterative game between a player and an adversary. In each round $t$ of the game, the player makes a prediction $\tilde{X}_t$. Next, the adversary chooses some $X_t$ and reveals it to the player, who then suffers the loss $l(X_t, \tilde{X}_t)$ for some convex loss function $l: \mathcal{X} \times \mathcal{X} \to \mathbb{R}$. The ultimate goal is to design a strategy for the player that minimizes the cumulative loss $\sum_{t=1}^{T} l(X_t, \tilde{X}_t)$ of $T$ rounds. For simplicity, we define
$$l_t: \mathcal{X} \to \mathbb{R}, \quad X \mapsto l(X_t, X).$$
In classical textbooks on time series analysis, the signal is assumed to be generated by a model, based on which the predictions are made. In this paper, we make no assumptions on the data generation. Therefore, minimizing the cumulative loss is generally impossible. An achievable objective is to keep a possibly small regret of not having chosen some ARIMA(p,q,d) model to generate the prediction $\tilde{X}_t$. Formally, we denote by $\tilde{X}_t(\alpha, \beta)$ the prediction using the ARIMA(p,q,d) model parameterized by $\alpha$ and $\beta$, given by (in this paper, we do not directly address the problem of cointegration, where the third term would be composed with a low-rank linear operator):
$$\tilde{X}_t(\alpha, \beta) = \sum_{i=1}^{p} \alpha_i \nabla^d X_{t-i} + \sum_{i=1}^{q} \beta_i \epsilon_{t-i} + \sum_{i=0}^{d-1} \nabla^i X_{t-1}.$$
The cumulative regret of $T$ rounds is then given by
$$R_T(\alpha, \beta) = \sum_{t=1}^{T} l_t(\tilde{X}_t) - \sum_{t=1}^{T} l_t(\tilde{X}_t(\alpha, \beta)).$$
The goal of this paper is to design a strategy for the player such that the cumulative regret grows sublinearly in $T$. In the ideal case, in which the data are actually generated by an ARIMA process, the predictions generated by the player yield a small loss. Otherwise, the predictions are always close to those produced by the best ARIMA model, independent of the data generation. Following the adversarial setting in [6], we allow the sequences $\{X_t\}$, $\{\epsilon_t\}$ and the parameters $\alpha$, $\beta$ to be selected by the adversary. Without any restrictions on the model, this is no different from the impossible task of minimizing the cumulative loss, since $\epsilon_{t-1}$ can always be selected such that $X_t = \tilde{X}_t(\alpha, \beta)$ holds for all $t$. Therefore, we make the following assumptions throughout this paper:
Assumption 1.
$X_t = \epsilon_t + \tilde{X}_t(\alpha, \beta)$, and there is some $R > 0$ such that $\|\epsilon_t\| \leq R$ for all $t = 1, \ldots, T$.
Assumption 2.
The coefficients $\beta_i$ satisfy $\sum_{i=1}^{q} \|\beta_i\|_{op} \leq 1 - \epsilon$ for some $\epsilon > 0$.
Since we are interested in competing against predictions generated by ARIMA models, we assume that $\epsilon_t$ is selected as if $X_t$ were generated by the ARIMA process. Furthermore, we assume that the norm $\|\epsilon_t\|$ is upper bounded within the $T$ iterations. Assumption 2 is a sufficient condition for the MA component to be invertible, which prevents it from blowing up as $t \to \infty$ [27].
Our work is based on the fact that we can compete against an ARIMA(p,q,d) model by taking predictions from an AR(m) model of the d-th order differencing for large enough m, which is shown in the following lemma, the proof of which can be found in Appendix A.
Lemma 1.
Let $\{X_t\}$, $\{\epsilon_t\}$, $\alpha$, and $\beta$ be as assumed in Assumptions 1 and 2. Then there is some $\gamma \in L(\mathcal{X}, \mathcal{X})^m$ with $m \geq q \frac{\log T}{\log \frac{1}{1-\epsilon}} + p$ such that
$$\|\nabla^d \tilde{X}_t(\gamma) - \nabla^d \tilde{X}_t(\alpha, \beta)\| \leq (1-\epsilon)^{\frac{t}{q}} R + \frac{2R}{T}$$
holds for all $t = 1, \ldots, T$, where we define $\nabla^d \tilde{X}_t(\gamma) = \sum_{i=1}^{m} \gamma_i \nabla^d X_{t-i}$.
As can be seen from the lemma, a prediction $\tilde{X}_t(\gamma)$ generated by the process
$$\tilde{X}_t(\gamma) = \sum_{i=1}^{m} \gamma_i \nabla^d X_{t-i} + \sum_{i=0}^{d-1} \nabla^i X_{t-1}$$
is close to the prediction $\tilde{X}_t(\alpha, \beta)$ generated by the ARIMA process. In the previous works [6,7], the loss function $l_t$ is assumed to be Lipschitz continuous in order to control the difference of the losses incurred by the approximation. In general, this does not hold for the squared error. However, from Assumption 1 and Lemma 1, it follows that both $\tilde{X}_t(\alpha, \beta)$ and $\tilde{X}_t(\gamma)$ lie in a compact set around $X_t$ with a bounded diameter. Given the convexity of $l$, which is then locally Lipschitz continuous in this compact convex domain, we obtain a similar property:
$$|l(X_t, \tilde{X}_t(\gamma)) - l(X_t, \tilde{X}_t(\alpha, \beta))| \leq L(X_t) \|\nabla^d \tilde{X}_t(\gamma) - \nabla^d \tilde{X}_t(\alpha, \beta)\|,$$
where $L(X_t)$ is some constant depending on $X_t$. For the squared error, it is easy to verify that the Lipschitz constant depends on $\|\nabla^d X_t\|$, the boundedness of which can be reasonably assumed. To avoid extraneous details, we simply add a third assumption:
Assumption 3.
Define the set $\mathbb{X}_t = \{X \in \mathcal{X} \mid \|X - X_t\| \leq 4R\}$. There is a compact convex set $\mathbb{X} \supseteq \bigcup_{t=1}^{T} \mathbb{X}_t$ such that $l_t$ is L-Lipschitz continuous in $\mathbb{X}$ for $t = 1, \ldots, T$.
The next corollary shows that the losses incurred by the ARIMA model and by its approximation are close, which allows us to take predictions from the approximation.
Corollary 1.
Let $\{X_t\}$, $\{\epsilon_t\}$, $\alpha$, $\beta$, and $l$ be as assumed in Assumptions 1–3. Then there is some $\gamma \in L(\mathcal{X}, \mathcal{X})^m$ with $m \geq q \frac{\log T}{\log \frac{1}{1-\epsilon}} + p$ such that
$$\sum_{t=1}^{T} \big( l_t(\tilde{X}_t(\gamma)) - l_t(\tilde{X}_t(\alpha, \beta)) \big) \leq L R \Big( \frac{1}{1 - (1-\epsilon)^{\frac{1}{q}}} + 2 \Big).$$
Proof. 
It follows from Assumption 1 and Lemma 1 that $\tilde{X}_t(\gamma), \tilde{X}_t(\alpha, \beta) \in \mathbb{X}$ holds for all $t = 1, \ldots, T$. Together with Assumption 3, we obtain
$$\sum_{t=1}^{T} \big( l_t(\tilde{X}_t(\gamma)) - l_t(\tilde{X}_t(\alpha, \beta)) \big) \leq L \sum_{t=1}^{T} \|\tilde{X}_t(\gamma) - \tilde{X}_t(\alpha, \beta)\|.$$
Applying Lemma 1, we obtain the claimed result.  □

4. Algorithms and Analysis

From Corollary 1, it follows clearly that an ARIMA(p,q,d) model can be approximated by an integrated AR model with large enough m. However, neither the order of differencing d nor the lag m is known. To circumvent tuning them using a previously collected dataset, we propose a framework with a two-level hierarchical construction, which is described in Algorithm 4.
Algorithm 4 Two-level framework.
Input: $K$ instances of the slave algorithm $A_1, \ldots, A_K$; an instance of the master algorithm $M$.
for $t = 1$ to $T$ do
  Get $\tilde{X}_t^i$ from each $A_i$
  Get $w_t \in \Delta_K$ from $M$             ▹ $\Delta_K$ is the standard K-simplex
  Integrate the prediction: $\tilde{X}_t = \sum_{i=1}^{K} w_{i,t} \tilde{X}_t^i$
  Observe $X_t$
  Define $z_t \in \mathbb{R}^K$ with $z_{i,t} = l_t(\tilde{X}_t^i)$
  Update $A_i$ using $z_{i,t}$ for $i = 1, \ldots, K$
  Update $M$ using $z_t$
end for
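The loop structure of the framework can be written down directly; the following Python sketch fixes minimal interfaces for the master and the slaves. These interfaces (`predict`, `update`, `weights`) are assumptions of the sketch, not part of the paper's specification.

```python
import numpy as np

def two_level_predict(slaves, master, X_stream, loss):
    """Sketch of the two-level framework (Algorithm 4).

    slaves: objects with .predict() -> x_hat and .update(x_true);
    master: object with .weights() -> w in the K-simplex and .update(z);
    loss: callable l(x_true, x_hat).
    """
    preds = []
    for x_true in X_stream:
        tips = [s.predict() for s in slaves]            # slave predictions X_t^i
        w = master.weights()                            # w_t from the master
        x_hat = sum(wi * p for wi, p in zip(w, tips))   # convex combination
        preds.append(x_hat)
        z = np.array([loss(x_true, p) for p in tips])   # per-slave losses z_{i,t}
        for s in slaves:
            s.update(x_true)                            # update each slave A_i
        master.update(z)                                # update the master M
    return preds
```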
The idea is to maintain a master algorithm $M$ and a set of slave algorithms $\{A_k \mid k = 1, \ldots, K\}$. At each step $t$, the master algorithm receives predictions $\tilde{X}_t^k$ from $A_k$ for $k = 1, \ldots, K$. Then it produces a convex combination $\tilde{X}_t = \sum_{i=1}^{K} w_{i,t} \tilde{X}_t^i$ for some $w_t \in \Delta_K$ in the simplex. Next, it observes $X_t$ and computes the loss $l_t(\tilde{X}_t^k)$ for each slave $A_k$, which is then used to update $A_k$ and $w_{t+1}$. Let $\{\tilde{X}_t^k\}$ be the sequence generated by some slave $k$. We define the regret of not having chosen the prediction generated by slave $k$ as
$$R_T(k) = \sum_{t=1}^{T} l_t\Big( \sum_{i=1}^{K} w_{i,t} \tilde{X}_t^i \Big) - \sum_{t=1}^{T} l_t(\tilde{X}_t^k),$$
and the regret of slave $k$ as
$$R_T(A_k) = \sum_{t=1}^{T} l_t(\tilde{X}_t^k) - \sum_{t=1}^{T} l_t(\tilde{X}_t(\gamma^k)),$$
where $\tilde{X}_t(\gamma^k)$ is the prediction generated by an integrated AR model parameterized by $\gamma^k$. Let $A_k$ be some slave. Then the regret of this two-level framework can be decomposed as
$$R_T(\alpha, \beta) = R_T(k) + R_T(A_k) + \underbrace{\sum_{t=1}^{T} l_t(\tilde{X}_t(\gamma^k)) - \sum_{t=1}^{T} l_t(\tilde{X}_t(\alpha, \beta))}_{\text{Corollary 1}}.$$
For $\gamma^k$, $\alpha$, and $\beta$ satisfying the condition in Corollary 1 (this is not a condition for the correctness of the algorithm: with more slaves, more $\alpha$, $\beta$ satisfy the condition, i.e., we increase the freedom of the model by increasing the number of slaves), the marked term above is upper bounded by a constant, that is,
$$\sum_{t=1}^{T} l_t(\tilde{X}_t(\gamma^k)) - \sum_{t=1}^{T} l_t(\tilde{X}_t(\alpha, \beta)) \leq O(1).$$
If the regrets of the master and the slaves grow sublinearly in $T$, we achieve an overall sublinear regret upper bound, which is formally described in the following corollary.
Corollary 2.
Let $A_i$ be an online learning algorithm competing against an AR($m_i$) model parameterized by $\gamma^i$ for $i = 1, \ldots, K$. For any ARIMA model parameterized by $\alpha$ and $\beta$, if there is a $k \in \{1, \ldots, K\}$ such that $\tilde{X}_t(\gamma^k)$, $\tilde{X}_t(\alpha, \beta)$, and $\{X_t\}$ satisfy Assumptions 1–3, then running Algorithm 4 with $M$ and $A_1, \ldots, A_K$ guarantees
$$\sum_{t=1}^{T} \big( l_t(\tilde{X}_t) - l_t(\tilde{X}_t(\alpha, \beta)) \big) \leq R_T(k) + R_T(A_k) + O(1).$$
Next, we design and analyze parameter-free algorithms for the slaves and the master.

4.1. Parameter-Free Online Learning Algorithms

4.1.1. Algorithms for Lipschitz Loss

Given fixed $m$ and $d$, an integrated AR(m) model can be treated as an ordinary linear regression model. In each iteration $t$, we select $\gamma_t = (\gamma_{1,t}, \ldots, \gamma_{m,t}) \in L(\mathcal{X}, \mathcal{X})^m$ and make the prediction
$$\tilde{X}_t(\gamma_t) = \sum_{i=1}^{m} \gamma_{i,t} \nabla^d X_{t-i} + \sum_{i=0}^{d-1} \nabla^i X_{t-1}.$$
Since $l_t$ is convex, there is some subgradient $g_t \in \partial l_t(\tilde{X}_t(\gamma_t))$ such that
$$l_t(\tilde{X}_t(\gamma_t)) - l_t(\tilde{X}_t(\gamma)) \leq g_t\Big( \sum_{i=1}^{m} (\gamma_{i,t} - \gamma_i) \nabla^d X_{t-i} \Big)$$
holds for all $\gamma \in L(\mathcal{X}, \mathcal{X})^m$. Define $g_{i,t}: L(\mathcal{X}, \mathcal{X}) \to \mathbb{R}, \ v \mapsto g_t(v \nabla^d X_{t-i})$. The regret can be further upper bounded by
$$\sum_{t=1}^{T} l_t(\tilde{X}_t(\gamma_t)) - l_t(\tilde{X}_t(\gamma)) \leq \sum_{t=1}^{T} \sum_{i=1}^{m} g_{i,t}(\gamma_{i,t} - \gamma_i). \quad (2)$$
Thus, we can cast the online linear regression problem as an online linear optimization problem. Unlike the previous work, we focus on the unconstrained setting, where $\gamma_t$ is not picked from a compact decision set. In this setting, we can apply an FTRL algorithm with an adaptive regularizer. To obtain an efficient implementation, we fix a basis for both $\mathcal{X}$ and $\mathcal{X}^*$. Now we can assume $\mathcal{X} = \mathcal{X}^* = \mathbb{R}^n$ and work with the matrix representation of $\gamma \in L(\mathcal{X}, \mathcal{X})$. It is easy to verify that (2) can be rewritten as
$$\sum_{t=1}^{T} l_t(\tilde{X}_t(\gamma_t)) - l_t(\tilde{X}_t(\gamma)) \leq \sum_{t=1}^{T} \sum_{i=1}^{m} \langle g_t (\nabla^d X_{t-i})^\top, \gamma_{i,t} - \gamma_i \rangle_F,$$
where $\langle A, B \rangle_F = \operatorname{tr}(A^\top B)$ is the Frobenius inner product. It is well known that the Frobenius inner product can be considered as a dot product of vectorized matrices, with which we obtain the simple first-order algorithm described in Algorithm 1 (the computational complexity per iteration depends linearly on the dimension of the parameter, i.e., it is $O(n^2 m)$).
The cumulative regret of Algorithm 1 can be upper bounded using the following theorem.
Theorem 1.
Let $\{X_t\}$ be any sequence of vectors taken from $\mathcal{X}$. Algorithm 1 guarantees
$$\sum_{t=1}^{T} l_t(\tilde{X}_t(\gamma_t)) - l_t(\tilde{X}_t(\gamma)) \leq \sum_{i=1}^{m} \Big( \frac{\|\gamma_i\|_F^2 L_{T+1}}{2} + L_{T+1} + \frac{L_{T+1}^2}{L_1} \Big) \sqrt{\sum_{t=1}^{T} \|\nabla^d X_{t-i}\|_2^2} + \sum_{i=1}^{m} \frac{(L_{T+1} G_{i,T+1} + \|\theta_{i,1}\|_F) \|\gamma_i\|_F^2 + \|\theta_{i,1}\|_F}{2}.$$
For an L-Lipschitz loss function $l_t$, in which case $L_{T+1}$ is upper bounded by $\max\{L_1, L\}$, we obtain a sublinear regret upper bound depending on the sequence of d-th order differences $\{\nabla^d X_t\}$. In case $L$ is known, we can set $L_1 = L$; otherwise, picking $L_1$ arbitrarily from a reasonable range (e.g., $L_1 = 1$) does not have a devastating impact on the performance of the algorithm.

4.1.2. Algorithms for Squared Errors

For the commonly used squared error given by
$$l_t(\tilde{X}_t(\gamma_t)) = \frac{1}{2} \|\tilde{X}_t(\gamma_t) - X_t\|_2^2,$$
it can be verified that $g_t$ can be represented as the vector
$$g_t = \sum_{i=1}^{m} \gamma_{i,t} \nabla^d X_{t-i} - \nabla^d X_t$$
for all $t$. Existing algorithms, which have a regret upper bound depending on $\|g_t\|_2$, can fail, since $\|g_t\|_2$ can be made arbitrarily large by the adversarially selected data sequence $X_1, \ldots, X_t$. To design a parameter-free algorithm for the squared error, we equip FTRL with the time-varying polynomial regularizer described in Algorithm 2.
Define
$$x_t = \big( (\nabla^d X_{t-1})^\top, \ldots, (\nabla^d X_{t-m})^\top \big)^\top$$
and consider the matrix representation $\gamma_t = [\gamma_{1,t} \cdots \gamma_{m,t}]$. Then we have $g_t = \gamma_t x_t - \nabla^d X_t$, and the upper bound of the regret can be rewritten as
$$\sum_{t=1}^{T} l_t(\tilde{X}_t(\gamma_t)) - l_t(\tilde{X}_t(\gamma)) \leq \sum_{t=1}^{T} \langle (\gamma_t x_t - \nabla^d X_t) x_t^\top, \gamma_t - \gamma \rangle_F.$$
The idea of Algorithm 2 is to run the FTRL algorithm with the polynomial regularizer
$$\frac{\lambda_t}{4} \|\gamma\|_F^4 + \frac{\eta_t}{2} \|\gamma\|_F^2$$
for increasing sequences $\{\lambda_t\}$ and $\{\eta_t\}$, which leads to the updating rule given by
$$\gamma_t = \arg\max_{\gamma \in L(\mathcal{X}, \mathcal{X})^m} \Big( \langle \theta_t, \gamma \rangle_F - \frac{\lambda_t}{4} \|\gamma\|_F^4 - \frac{\eta_t}{2} \|\gamma\|_F^2 \Big) = c \frac{\theta_t}{\|\theta_t\|_F}$$
for $c \geq 0$ satisfying $\lambda_t c^3 + \eta_t c = \|\theta_t\|_F$. Since we have $\lambda_t \geq 0$ and $\eta_t > 0$ for $\theta_1 \neq 0$, $c$ exists and has a closed-form expression. The computational complexity per iteration has a linear dependency on the dimension of $L(\mathcal{X}, \mathcal{X})^m$.
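For completeness, one way of spelling out this closed form, which is our own addition and uses only $\lambda_t > 0$ and $\eta_t > 0$, is via Cardano's formula. Dividing by $\lambda_t$ turns the condition into the depressed cubic
$$c^3 + pc - s = 0, \qquad p = \frac{\eta_t}{\lambda_t} > 0, \quad s = \frac{\|\theta_t\|_F}{\lambda_t},$$
whose discriminant $\frac{s^2}{4} + \frac{p^3}{27}$ is positive, so there is exactly one real root,
$$c = \sqrt[3]{\frac{s}{2} + \sqrt{\frac{s^2}{4} + \frac{p^3}{27}}} + \sqrt[3]{\frac{s}{2} - \sqrt{\frac{s^2}{4} + \frac{p^3}{27}}},$$
where real cube roots are taken. Since $\lambda_t c^3 + \eta_t c$ is increasing in $c$ and vanishes at $c = 0$, this root is nonnegative. Alternatively, the root can be computed numerically, e.g., with `numpy.roots`.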
Theorem 2.
Let $\{X_t\}$ be any sequence of vectors taken from $\mathcal{X}$ and let
$$l_t(\tilde{X}_t(\gamma)) = \frac{1}{2} \|X_t - \tilde{X}_t(\gamma)\|_2^2 = \frac{1}{2} \|\nabla^d X_t - \nabla^d \tilde{X}_t(\gamma)\|_2^2$$
be the squared error. We define $x_t = ((\nabla^d X_{t-1})^\top, \ldots, (\nabla^d X_{t-m})^\top)^\top$ and $\gamma = [\gamma_1 \cdots \gamma_m]$, the matrix representation of $\gamma_1, \ldots, \gamma_m \in L(\mathcal{X}, \mathcal{X})$. Then Algorithm 2 guarantees
$$\sum_{t=1}^{T} \big( l_t(\tilde{X}_t(\gamma_t)) - l_t(\tilde{X}_t(\gamma)) \big) \leq \frac{(\sqrt{m} G_{T+1}^2 + \|\theta_1\|_F) \|\gamma\|_F^2}{2} + \|\theta_1\|_F + \Big( 1 + \frac{\|\gamma\|_F^4}{4} \Big) \sqrt{\sum_{t=1}^{T} \|x_t\|_2^4} + \Big( 1 + \frac{G_{T+1}}{G_0} + \frac{\|\gamma\|_F^2}{2} \Big) \sqrt{\sum_{t=1}^{T} \|\nabla^d X_t x_t^\top\|_F^2}$$
for all $\gamma \in L(\mathcal{X}, \mathcal{X})^m$.
For squared error, Algorithm 2 does not require a compact decision set and ensures a sublinear regret bound depending on the data sequence. Similar to Algorithm 1, one can set G 0 according to the prior knowledge about the bounds of the time series. Alternatively, we can simply set G 0 = 1 to obtain a reasonable performance.

4.2. Online Model Selection Using Master Algorithms

The straightforward choice of the master algorithm would be the exponentiated gradient algorithm for prediction with expert advice. However, this algorithm requires tuning of the learning rate and losses bounded by a small quantity, which cannot be assumed in our case. The AdaHedge algorithm [29] solves these problems; however, it yields a worst-case regret bound depending on the largest loss observed, which can be much worse than a data-dependent regret bound.
Our idea is based on the adaptive optimistic follow the regularized leader (AO-FTRL) framework [10]. Given a sequence of hints $\{h_t\}$ and loss vectors $\{z_t\}$, AO-FTRL guarantees a regret bound related to $\sum_{t=1}^{T} \|z_t - h_t\|_t^2$ for some time-varying norm $\|\cdot\|_t$. In our case, where the loss incurred by slave $k$ at iteration $t$ is given by $l(X_t, \tilde{X}_t^k)$, we simply choose $h_{k,t} = l(\sum_{i=0}^{d-1} \nabla^i X_{t-1}, \tilde{X}_t^k)$. If $l$ is L-Lipschitz in its first argument, then we have $|z_{k,t} - h_{k,t}| \leq L \|\nabla^d X_t\|$, since $X_t - \sum_{i=0}^{d-1} \nabla^i X_{t-1} = \nabla^d X_t$; this leads to a data-dependent regret. The obtained algorithm is described in Algorithm 3. Its regret is upper bounded by the following theorem, the proof of which is provided in Appendix B.
Theorem 3.
Let $\{\tilde{X}_t\}$, $\{\tilde{X}_t^k\}$, $\{z_t\}$, $\{h_t\}$, and $\{w_t\}$ be as generated in Algorithm 3. Assume $l$ is L-Lipschitz in its first argument and convex in its second argument. Then for any sequence $\{X_t\}$ and slave algorithm $A_k$, we have
$$R_T(k) \leq \big( \sqrt{2 \log K} + 8 \sqrt{\log K} \big) \sqrt{\sum_{t=1}^{T} L^2 \|\nabla^d X_t\|_2^2}.$$
By Corollary 2, combining Algorithm 3 with Algorithm 1 or 2 guarantees a data-dependent regret upper bound sublinear in $T$. Note that there is an input parameter $d$ for Algorithm 3, which can be adjusted according to prior knowledge of the dataset such that $\|\nabla^d X_t\|_2^2$ can be bounded by a small quantity. In case no prior knowledge can be obtained, we can set $d$ to the maximal order of differencing used in the slave algorithms. Arguably, Lipschitz continuity is not a reasonable assumption for the squared error on an unbounded domain. With a bounded $\|\nabla^d X_t\|_2^2$, we can assume that the loss function is locally Lipschitz, but with a Lipschitz constant depending on the prediction. In the next section, we show the performance of Algorithm 3 in combination with Algorithms 1 and 2 in different experimental settings.

5. Experiments and Results

In this section, we carry out experiments on both synthetic and real-world data to show that the proposed algorithms can generate promising predictions without tuning hyperparameters.

5.1. Experiment Settings

The synthetic data were generated randomly. We run 20 trials for each synthetic experiment and average the results. For numerical stability, we scale the real-world data down so that the values are between 0 and 10. Note that the range of the data is not assumed or used in the algorithms.
Setting 1: Sanity Check
For a sanity check, we generate a stationary 10-dimensional ARIMA(5,2,1) process using randomly drawn coefficients.
Setting 2: Time-Varying Parameters
Aiming to demonstrate the effectiveness of the proposed algorithms in the non-stationary case, we generate a non-stationary 10-dimensional ARIMA(5,2,1) process using time-varying parameters. We draw $\alpha^1, \alpha^2$ and $\beta^1, \beta^2$ randomly and independently, and generate the data at iteration $t$ with the ARIMA(5,2,1) model parameterized by $\alpha^t = \frac{t}{10^4} \alpha^1 + (1 - \frac{t}{10^4}) \alpha^2$ and $\beta^t = \frac{t}{10^4} \beta^1 + (1 - \frac{t}{10^4}) \beta^2$. A sketch of such a generator is given below; the scaling of the random coefficients is an assumption of the sketch and is not taken from the paper.

```python
import numpy as np

def gen_tv_arima(T=10_000, n=10, p=5, q=2, seed=0):
    """Sketch of the Setting-2 generator: an ARIMA(5,2,1) process whose
    coefficients drift linearly between two random draws over 10^4 steps.
    The 0.1 coefficient scaling (for stability) is an assumption."""
    rng = np.random.default_rng(seed)
    a1, a2 = rng.normal(size=(2, p, n, n)) * 0.1
    b1, b2 = rng.normal(size=(2, q, n, n)) * 0.1
    dX = [rng.normal(size=n) for _ in range(p)]      # initial differences
    eps = [rng.normal(size=n) for _ in range(q)]     # initial noise terms
    X = [np.zeros(n)]
    for t in range(T):
        c = t / 1e4
        a = c * a1 + (1 - c) * a2                    # alpha^t
        b = c * b1 + (1 - c) * b2                    # beta^t
        e = rng.normal(size=n)
        d_new = sum(a[i] @ dX[-1 - i] for i in range(p)) \
              + sum(b[i] @ eps[-1 - i] for i in range(q)) + e
        dX.append(d_new)
        eps.append(e)
        X.append(X[-1] + d_new)                      # integrate (d = 1)
    return np.array(X)
```
Setting 3: Time-Varying Models
To obtain more adversarially selected time series values, we generate the first half of the values using a stationary 10-dimensional ARIMA(5,2,1) model and the second half of the values using a stationary 10-dimensional ARIMA(5,2,0) model. The model parameters are drawn randomly.
Stock Data: Time Series with Trend
Following the experiments in [8], we collect the daily stock prices of seven technology companies from Yahoo Finance together with the S&P 500 index over more than twenty years; this data has an obvious increasing trend and is believed to exhibit integration.
Google Flu Data: Time Series with Seasonality
We collect estimates of influenza activity in countries of the northern hemisphere, which show an obvious seasonal pattern. In this experiment, we examine the performance of the algorithms in handling regular and predictable changes that occur over a fixed period.
Electricity Demand: Trend and Seasonality
In this setting, we collect monthly load, gross electricity production, net electricity consumption, and gross demand in Turkey from 1976 to 2010. The dataset contains both trend and seasonality.

5.2. Experiments for the Slave Algorithms

We first fix $d = 1$ and $m = 16$ and compare our slave algorithms with ONS and OGD from [9] for the squared error $l_t(\tilde{X}_t) = \frac{1}{2} \|X_t - \tilde{X}_t\|_2^2$ and the Euclidean distance $l_t(\tilde{X}_t) = \|X_t - \tilde{X}_t\|_2$. ONS and OGD stack and vectorize the parameter matrices and incrementally update the vectorized parameter using the rules
$$w_{t+1} = \Pi_{\mathcal{W}} \Big( w_t - \eta \Big( \sum_{s=1}^{t} g_s g_s^\top + \lambda I \Big)^{-1} g_t \Big)$$
and
$$w_{t+1} = \Pi_{\mathcal{W}} (w_t - \eta g_t),$$
respectively, where $g_t$ is the vectorized gradient at step $t$, $\mathcal{W}$ is the decision set satisfying $\sup_{u \in \mathcal{W}} \|u\|_2 \leq c$, and the operator $\Pi_{\mathcal{W}}(v)$ projects $v$ onto $\mathcal{W}$. We select a list of candidate values for each hyperparameter, evaluate their performance on the whole dataset, and select the configuration with the best performance for comparison. Since the synthetic data are generated randomly, we average the results over 20 trials for stability. The corresponding results are shown in Figure 1, Figure 2, Figure 3, Figure 4, Figure 5 and Figure 6 (to amplify the differences between the algorithms, we use log plots for the y-axis in all settings; for the synthetic datasets, we also use a log plot for the x-axis so that the behavior of the algorithms in the first 1000 steps can be better observed). To show the impact of the hyperparameters on the performance of the baseline algorithms, we also plot their performance using sub-optimal configurations. Note that, since the error term $\epsilon_t$ cannot be predicted, an ideal predictor would suffer an average error of at least $\|\epsilon_t\|_2^2$ and $\|\epsilon_t\|_2$ for the two kinds of loss function. This quantity is known for the synthetic datasets and is plotted in the figures.
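For reference, a minimal sketch of the projected-OGD baseline with $d = 1$ and squared error follows; the concrete values of $\eta$ and $c$ are placeholders and are exactly the hyperparameters the proposed algorithms avoid.

```python
import numpy as np

def ogd_baseline(X, m=16, eta=0.1, c=10.0):
    """Sketch of the projected-OGD baseline (d = 1, squared error).

    The vectorized parameter w is kept in the ball {||w||_2 <= c}.
    """
    T, n = X.shape
    dX = np.diff(X, axis=0)
    w = np.zeros(n * n * m)                      # vectorized (n x nm) matrix
    preds = []
    for t in range(m, T - 1):
        x = dX[t - m:t][::-1].reshape(-1)        # stacked last m differences
        W = w.reshape(n, n * m)
        d_hat = W @ x
        preds.append(X[t] + d_hat)
        g = np.outer(d_hat - dX[t], x).reshape(-1)   # vectorized gradient
        w = w - eta * g                          # gradient step
        nrm = np.linalg.norm(w)
        if nrm > c:
            w *= c / nrm                         # projection onto the ball
    return np.array(preds)
```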
In all settings, both AdaFTRL and AdaFTRL-Poly have a performance on par with well-tuned OGD and ONS, which can have extremely bad performance using sub-optimal hyperparameter configurations. In the experiments using synthetic datasets, AdaFTRL suffers large loss at the beginning while generating accurate predictions after 1000 iterations. The relative performances of the proposed algorithms after the first 1000 iterations compared to the best tuned baseline algorithms are plotted in Appendix D. AdaFTRL-Poly has more stable performance compared to AdaFTRL. In the experiment with Google Flu data, all algorithms suffer huge losses around iteration 300 due to an abrupt change in the dataset. OGD and ONS with sub-optimal hyperparameter configurations, despite good performance for the first half of the data, generate very inaccurate predictions after the abrupt change in the dataset. This could lead to a catastrophic failure in practice, when certain patterns do not appear in the dataset collected for hyperparameter tuning. Our algorithms are more robust against this change and perform similarly to OGD and ONS with optimal hyperparameter configurations.

5.3. Experiments for Online Model Selection

The performance of the two-level framework and Algorithm 3 for online model selection is demonstrated in Figure 7, Figure 8, Figure 9, Figure 10, Figure 11 and Figure 12. We simultaneously maintain 96 AR(m) models of d-th-order differencing for $m = 1, \ldots, 32$ and $d = 0, \ldots, 2$, which are updated by Algorithm 2 for the squared error and by Algorithm 1 for the Euclidean distance. The predictions generated by the AR models are aggregated using Algorithm 3 and the aggregation algorithm (AA) introduced in [13] with the learning rate set to $T$. We compare the average losses incurred by the aggregated predictions with those incurred by the best AR model. To show the impact of $m$ and $d$, we also plot the average loss of some other sub-optimal AR models.
In all settings, AO-Hedge outperforms AA, although the differences are very slight in some of the experiments. We would like to stress again that the choice of the hyperparameters has a great impact on the performance of the AR model. In settings 1–3, the AR model with 0-th-order differencing has the best performance, although the data are generated using d = 1 , which suggests that the prior knowledge about the data generation may not be helpful for the model selection in all cases. The experimental results also show that AO-Hedge has a performance similar to the best AR model.

6. Conclusions

We proposed algorithms for fitting ARIMA models in an online manner without requiring prior knowledge or tuning hyperparameters. We showed that the cumulative regret of our method grows sublinearly with the number of iterations and depends on the values of the time series. The comparison study on both synthetic and real-world datasets suggests that the proposed algorithms have a performance on par with the well-tuned state-of-the-art algorithms.
There are still several remaining issues that we want to address in future research. Firstly, it would be interesting to also develop a parameter-free algorithm for the cointegrated vector ARMA model. Secondly, we believe that the strong assumption on the β coefficient can be relaxed for multi-dimensional time series by generalizing Lemma 2 in [7]. Furthermore, we are also interested in applying online learning to other time series models such as the (generalized) ARCH model [30]. Finally, the proposed algorithms need to be empirically analyzed using more real-world datasets and loss functions, and compared with more recent predictive models such as recurrent neural networks and the models combining neural networks and ARIMA models [31].

Author Contributions

Conceptualization, W.S.; methodology, W.S. and L.F.R.; validation, W.S., L.F.R., and F.S.; formal analysis, W.S.; investigation, W.S. and L.F.R.; writing—original draft preparation, W.S. and L.F.R.; writing—review and editing, W.S., L.F.R., F.S., and S.A.; visualization, L.F.R.; supervision, F.S. and S.A. All authors have read and agreed to the published version of the manuscript.

Funding

We acknowledge support by the German Research Foundation and the Open Access Publication Fund of TU Berlin.

Institutional Review Board Statement

Not applicable.

Informed Consent Statement

Not applicable.

Data Availability Statement

The source code for generating the synthetic dataset, the implementation of the algorithms, and detailed information about our experiments are available on GitHub: https://github.com/OnlinePredictorTS/AOLForTimeSeries (accessed on March 2021). The stock data are collected from https://finance.yahoo.com/ (accessed on March 2021). The Google Flu data are available at https://github.com/datalit/googleflutrends/ (accessed on March 2021). Detailed information about the electricity demand data can be found in [32].

Conflicts of Interest

The authors declare no conflict of interest.

Appendix A

We prove Lemma 1 in this section. Consider the ARIMA model given by
$$\nabla^d X_t(\alpha, \beta) = \sum_{i=1}^{p} \alpha_i \nabla^d X_{t-i} + \sum_{i=1}^{q} \beta_i \epsilon_{t-i} + \epsilon_t$$
with $\nabla^d X_t(\alpha, \beta) = \nabla^d X_t$ for $t \leq 0$. Let
$$X_t(\alpha, \beta) = \nabla^d X_t(\alpha, \beta) + \sum_{i=0}^{d-1} \nabla^i X_{t-1}$$
be the t-th value generated by the ARIMA process. To prove Lemma 1, we generalize the proof provided in [6]. To remove the MA component, we first recursively define a growing process of the d-th-order differencing
$$\nabla^d X_t^\infty(\alpha, \beta) = \sum_{i=1}^{p} \alpha_i \nabla^d X_{t-i} + \sum_{i=1}^{q} \beta_i \big( \nabla^d X_{t-i} - \nabla^d X_{t-i}^\infty(\alpha, \beta) \big)$$
with $\nabla^d X_t^\infty(\alpha, \beta) = \nabla^d X_t$ for $t \leq 0$. Let
$$X_t^\infty(\alpha, \beta) = \nabla^d X_t^\infty(\alpha, \beta) + \sum_{i=0}^{d-1} \nabla^i X_{t-1}$$
be the t-th value generated by this process.
The next lemma shows that it approximates the prediction of the ARIMA(p,q,d) process.
Lemma A1.
For any $\alpha$, $\beta$, and $\{\epsilon_t\}$ satisfying Assumptions 1 and 2, we have, for $t = 1, \ldots, T$,
$$\|X_t^\infty(\alpha, \beta) - \tilde{X}_t(\alpha, \beta)\| \leq (1-\epsilon)^{\frac{t}{q}} R.$$
Proof. 
First of all, we have
$$X_t^\infty(\alpha, \beta) - \tilde{X}_t(\alpha, \beta) = \nabla^d X_t^\infty(\alpha, \beta) - \nabla^d \tilde{X}_t(\alpha, \beta) = \sum_{i=1}^{q} \beta_i \big( \nabla^d X_{t-i} - \nabla^d X_{t-i}^\infty(\alpha, \beta) - \epsilon_{t-i} \big)$$
for $t > 0$. Define $Y_t = \nabla^d X_t - \nabla^d X_t^\infty(\alpha, \beta) - \epsilon_t$. W.l.o.g. we can assume $\|\epsilon_t\| \leq R$ for $t \leq 0$. Next, we prove by induction on $t$ that $\|Y_\tau\| \leq (1-\epsilon)^{\frac{\tau}{q}} R$ holds for all $\tau \leq t$. For the induction basis, we have
$$\|Y_\tau\| = \|\epsilon_\tau\| \leq R$$
for all $\tau \leq 0$. Assume that the claim holds for some $t$. Then we have
$$\|Y_{t+1}\| = \big\| \nabla^d X_{t+1} - \nabla^d X_{t+1}^\infty(\alpha, \beta) - \epsilon_{t+1} \big\| = \Big\| \nabla^d X_{t+1} - \sum_{i=1}^{p} \alpha_i \nabla^d X_{t+1-i} - \sum_{i=1}^{q} \beta_i \epsilon_{t+1-i} - \epsilon_{t+1} - \sum_{i=1}^{q} \beta_i Y_{t+1-i} \Big\| = \Big\| \sum_{i=1}^{q} \beta_i Y_{t+1-i} \Big\| \leq \sum_{i=1}^{q} \|Y_{t+1-i}\| \|\beta_i\|_{op} \leq (1-\epsilon)^{\frac{t+1-q}{q}} R \sum_{i=1}^{q} \|\beta_i\|_{op} \leq (1-\epsilon)^{\frac{t+1}{q}} R,$$
where the third equality follows from Assumption 1, which concludes the induction. Finally, we have
$$\|X_t^\infty(\alpha, \beta) - \tilde{X}_t(\alpha, \beta)\| = \Big\| \sum_{i=1}^{q} \beta_i \big( \nabla^d X_{t-i} - \nabla^d X_{t-i}^\infty(\alpha, \beta) - \epsilon_{t-i} \big) \Big\| \leq \sum_{i=1}^{q} \|\beta_i\|_{op} \|Y_{t-i}\| \leq (1-\epsilon)(1-\epsilon)^{\frac{t-q}{q}} R = (1-\epsilon)^{\frac{t}{q}} R,$$
which is the claimed result.  □
Next, we recursively define the following process:
$$\nabla^d X_t^{\infty,m}(\alpha, \beta) = \sum_{i=1}^{p} \alpha_i \nabla^d X_{t-i} + \sum_{i=1}^{q} \beta_i \big( \nabla^d X_{t-i} - \nabla^d X_{t-i}^{\infty,m-i}(\alpha, \beta) \big), \quad (A1)$$
where $\nabla^d X_t^{\infty,m}(\alpha, \beta) = \nabla^d X_t$ for $m \leq 0$. Let $\{X_t^{\infty,m}(\alpha, \beta)\}$ be the sequence generated as follows:
$$X_t^{\infty,m}(\alpha, \beta) = \nabla^d X_t^{\infty,m}(\alpha, \beta) + \sum_{i=0}^{d-1} \nabla^i X_{t-1}. \quad (A2)$$
We show in the next lemma that it is close to $\{X_t^\infty(\alpha, \beta)\}$.
Lemma A2.
For any $\alpha$, $\beta$, $\{l_t\}$, and $\{\epsilon_t\}$ satisfying Assumptions 1 and 2, we have
$$\|X_t^{\infty,m}(\alpha, \beta) - X_t^\infty(\alpha, \beta)\| \leq \frac{2R}{T}$$
for $m \geq q \frac{\log T}{\log \frac{1}{1-\epsilon}}$.
Proof. 
Define $Z_t^m = \nabla^d X_t^{\infty,m}(\alpha, \beta) - \nabla^d X_t^\infty(\alpha, \beta)$. We prove by induction on $m$ that
$$\|Z_t^{\tilde{m}}\| \leq (1-\epsilon)^{\frac{\tilde{m}}{q}} 2R$$
holds for all $t = 1, \ldots, T$ and $0 \leq \tilde{m} \leq m$. For $\tilde{m} = 0$, we have, for $t = 1, \ldots, T$,
$$Z_t^0 = \nabla^d X_t^{\infty,0}(\alpha, \beta) - \nabla^d X_t^\infty(\alpha, \beta) = \nabla^d X_t - \nabla^d X_t^\infty(\alpha, \beta).$$
By the definition of the process $\{\nabla^d X^\infty(\alpha, \beta)\}$, we have
$$-\nabla^d X_t + \nabla^d X_t^\infty(\alpha, \beta) = -\nabla^d X_t + \sum_{i=1}^{p} \alpha_i \nabla^d X_{t-i} + \sum_{i=1}^{q} \beta_i \big( \nabla^d X_{t-i} - \nabla^d X_{t-i}^\infty(\alpha, \beta) \big) = -\nabla^d X_t + \sum_{i=1}^{p} \alpha_i \nabla^d X_{t-i} + \sum_{i=1}^{q} \beta_i \epsilon_{t-i} + \sum_{i=1}^{q} \beta_i \big( \nabla^d X_{t-i} - \nabla^d X_{t-i}^\infty(\alpha, \beta) - \epsilon_{t-i} \big) = \nabla^d \tilde{X}_t(\alpha, \beta) - \nabla^d X_t + \sum_{i=1}^{q} \beta_i Y_{t-i},$$
where $Y_{t-i}$ is defined as in the proof of Lemma A1. From the assumption, we have $\|\nabla^d \tilde{X}_t(\alpha, \beta) - \nabla^d X_t\| = \|\epsilon_t\| \leq R$, and, as we have proved in Lemma A1, $\|Y_t\| \leq R$ holds. Therefore, we obtain $\|Z_t^0\| \leq 2R$, which is the induction basis. Next, assume the claim holds for all $0, \ldots, m-1$. Then we have
$$\|Z_t^m\| = \Big\| \sum_{i=1}^{q} \beta_i \big( \nabla^d X_{t-i} - \nabla^d X_{t-i}^{\infty,m-i}(\alpha, \beta) - \nabla^d X_{t-i} + \nabla^d X_{t-i}^\infty(\alpha, \beta) \big) \Big\| = \Big\| \sum_{i=1}^{q} \beta_i \big( \nabla^d X_{t-i}^\infty(\alpha, \beta) - \nabla^d X_{t-i}^{\infty,m-i}(\alpha, \beta) \big) \Big\| \leq \Big\| \sum_{i=1}^{m} \beta_i \big( \nabla^d X_{t-i}^\infty(\alpha, \beta) - \nabla^d X_{t-i}^{\infty,m-i}(\alpha, \beta) \big) \Big\| + \Big\| \sum_{i=m+1}^{q} \beta_i \big( \nabla^d X_{t-i}^\infty(\alpha, \beta) - \nabla^d X_{t-i} \big) \Big\|.$$
From the induction hypothesis, we have
$$\|\nabla^d X_{t-i}^\infty(\alpha, \beta) - \nabla^d X_{t-i}^{\infty,m-i}(\alpha, \beta)\| \leq (1-\epsilon)^{\frac{m-i}{q}} 2R.$$
From the proof of the induction basis, we have
$$\Big\| \sum_{i=m+1}^{q} \beta_i \big( \nabla^d X_{t-i}^\infty(\alpha, \beta) - \nabla^d X_{t-i} \big) \Big\| \leq 2R \sum_{i=m+1}^{q} \|\beta_i\|_{op}.$$
Therefore, $\|Z_t^m\|$ can be further bounded using
$$\|Z_t^m\| \leq 2R \sum_{i=1}^{m} \|\beta_i\|_{op} (1-\epsilon)^{\frac{m-i}{q}} + 2R \sum_{i=m+1}^{q} \|\beta_i\|_{op} \leq 2R \sum_{i=1}^{m} \|\beta_i\|_{op} (1-\epsilon)^{\frac{m-i}{q}} + 2R \sum_{i=m+1}^{q} \|\beta_i\|_{op} (1-\epsilon)^{\frac{m-i}{q}} \leq (1-\epsilon)^{\frac{m-q}{q}} 2R \sum_{i=1}^{q} \|\beta_i\|_{op} \leq (1-\epsilon)^{\frac{m}{q}} 2R.$$
Choosing $m \geq q \frac{\log T}{\log \frac{1}{1-\epsilon}} = q \log_{\frac{1}{1-\epsilon}} T$, we have $(1-\epsilon)^{\frac{m}{q}} \leq \frac{1}{T}$ and thus
$$\|X_t^{\infty,m}(\alpha, \beta) - X_t^\infty(\alpha, \beta)\| \leq \frac{2R}{T},$$
which is the claimed result.  □
This process of the d-th-order differencing is actually an integrated AR(m+p) process with order $d$, as shown in the following lemma.
Lemma A3.
For any data sequence $\{X_t^{\infty,m}(\alpha, \beta)\}$ generated by a process of the d-th-order differencing given by (A1) and (A2), there is a $\gamma \in L(\mathcal{X}, \mathcal{X})^{m+p}$ such that
$$\sum_{i=1}^{m+p} \gamma_i \nabla^d X_{t-i} + \sum_{i=0}^{d-1} \nabla^i X_{t-1} = X_t^{\infty,m}(\alpha, \beta)$$
holds for all $t$.
Proof. 
Let $\{\nabla^d X_t^{\infty,m}(\alpha, \beta)\}$ be the sequence generated by (A1). We prove by induction on $m$ that for all $\tilde{m} \leq m$ there is a $\gamma^{\tilde{m}} \in L(\mathcal{X}, \mathcal{X})^{\tilde{m}+p}$ such that
$$\nabla^d X_t^{\infty,\tilde{m}}(\alpha, \beta) = \sum_{i=1}^{\tilde{m}+p} \gamma_i^{\tilde{m}} \nabla^d X_{t-i}$$
holds for all $\alpha$ and $\beta$. The induction basis follows directly from the definition, since
$$\nabla^d X_t^{\infty,0}(\alpha, \beta) = \sum_{i=1}^{p} \alpha_i \nabla^d X_{t-i}.$$
Assume that the claim holds for some $m$. Let $\alpha_i$ be the zero operator for $i > p$ and $\beta_i$ the zero operator for $i > q$. Then we have
$$\nabla^d X_t^{\infty,m+1}(\alpha, \beta) = \sum_{i=1}^{p} \alpha_i \nabla^d X_{t-i} + \sum_{i=1}^{q} \beta_i \big( \nabla^d X_{t-i} - \nabla^d X_{t-i}^{\infty,m+1-i}(\alpha, \beta) \big) = \sum_{i=1}^{p} \alpha_i \nabla^d X_{t-i} + \sum_{i=1}^{m+1} \beta_i \nabla^d X_{t-i} - \sum_{i=1}^{m+1} \beta_i \nabla^d X_{t-i}^{\infty,m+1-i}(\alpha, \beta) = \sum_{i=1}^{p} \alpha_i \nabla^d X_{t-i} + \sum_{i=1}^{m+1} \beta_i \nabla^d X_{t-i} - \sum_{i=1}^{m+1} \beta_i \sum_{j=1}^{m+1-i+p} \gamma_j^{m+1-i} \nabla^d X_{t-i-j} = \sum_{i=1}^{p} \alpha_i \nabla^d X_{t-i} + \sum_{i=1}^{m+1} \beta_i \nabla^d X_{t-i} - \sum_{i=1}^{m+p+1} \Big( \sum_{j=1}^{m+1} \beta_j \gamma_{i-j}^{m+1-j} \Big) \nabla^d X_{t-i},$$
where the second equality follows from the fact that $\beta_i (\nabla^d X_{t-i} - \nabla^d X_{t-i}^{\infty,m+1-i}(\alpha, \beta)) = 0$ for $i > m+1$, the third equality uses the induction hypothesis, and the last equality is obtained by rearranging and setting $\gamma_i^{\tilde{m}} = 0$ for $i \leq 0$ or $i > \tilde{m} + p$. The induction step is obtained by setting
$$\gamma_i^{m+1} = \alpha_i + \beta_i - \sum_{j=1}^{m+1} \beta_j \gamma_{i-j}^{m+1-j}$$
for $i = 1, \ldots, m+p+1$, and the claimed result follows.  □
Finally, we prove Lemma 1 by combining the results.
Proof of Lemma 1.
From Lemmas A1, A2, and A3, there is some γ L ( X , X ) m with m q log T log 1 1 ϵ + p such that
d X t ( γ ) d X t ˜ ( α , β ) = d X t m ( γ ) d X t ˜ ( α , β ) d X t m ( γ ) d X t ( α , β ) + d X t ( γ ) d X t ˜ ( α , β ) ( 1 ϵ ) t q R + 2 R T ,
which is the claimed result.  □

Appendix B

In this section, we prove the theorems in Section 4. The required notation is summarized in Appendix C. We apply some important properties of convex functions and their convex conjugates defined on a general vector space, which can be found in [17]. The proposed algorithms are instances of the adaptive optimistic follow the regularized leader (AO-FTRL) framework [10], which is described in Algorithm A1.
Algorithm A1 AO-FTRL.
Input: closed convex set $W \subseteq \mathcal{X}$
Initialize: $\theta_1$ arbitrarily
for $t = 1$ to $T$ do
  Get hint $h_t$
  $w_t \in \partial \psi_t^*(\theta_t - h_t)$
  Observe $g_t \in \mathcal{X}^*$
  $\theta_{t+1} = \theta_t - g_t$
end for
Lemma A4.
We run AO-FTRL with closed convex regularizers $\psi_1, \ldots, \psi_T$ defined on $W \subseteq \mathcal{X}$ satisfying $\psi_t(w) \leq \psi_{t+1}(w)$ for all $w \in W$ and $t = 1, \ldots, T$. Then, for all $u \in W$, we have
$$\sum_{t=1}^{T} g_t(w_t - u) \leq \psi_{T+1}(u) + \psi_1^*(\theta_1) + \sum_{t=1}^{T} B_{\psi_t^*}(\theta_{t+1}, \theta_t - h_t),$$
where $B_{\psi_t^*}(\theta_{t+1}, \theta_t - h_t)$ is the Bregman divergence associated with $\psi_t^*$.
Proof. 
W.l.o.g. we assume $h_{T+1} = 0$, since it is not involved in the algorithm. Then we have
$$\sum_{t=1}^{T} \big( \psi_{t+1}^*(\theta_{t+1} - h_{t+1}) - \psi_t^*(\theta_t - h_t) \big) = \psi_{T+1}^*(\theta_{T+1} - h_{T+1}) - (\theta_1 - h_1)(w_1) + \psi_1(w_1) \geq (\theta_{T+1} - h_{T+1})(u) - \psi_{T+1}(u) + h_1(w_1) - \theta_1(w_1) + \psi_1(w_1) \geq \theta_{T+1}(u) - \psi_{T+1}(u) + h_1(w_1) - \sup_{w \in W} \big( \theta_1(w) - \psi_1(w) \big) = -\sum_{t=1}^{T} g_t(u) - \psi_{T+1}(u) + h_1(w_1) - \psi_1^*(\theta_1).$$
Furthermore, we have
$$\psi_{t+1}^*(\theta_{t+1} - h_{t+1}) - \psi_t^*(\theta_t - h_t) = \psi_{t+1}^*(\theta_{t+1} - h_{t+1}) - \psi_t^*(\theta_{t+1}) + \psi_t^*(\theta_{t+1}) - \psi_t^*(\theta_t - h_t) \leq (\theta_{t+1} - h_{t+1})(w_{t+1}) - \psi_{t+1}(w_{t+1}) - \theta_{t+1}(w_{t+1}) + \psi_t(w_{t+1}) + \psi_t^*(\theta_{t+1}) - \psi_t^*(\theta_t - h_t) \leq \psi_t^*(\theta_{t+1}) - \psi_t^*(\theta_t - h_t) - h_{t+1}(w_{t+1}).$$
Combining the inequalities above, rearranging, and adding $\sum_{t=1}^{T} g_t(w_t)$ to both sides, we obtain
$$\sum_{t=1}^{T} g_t(w_t - u) \leq \psi_{T+1}(u) + \psi_1^*(\theta_1) + \sum_{t=1}^{T} \big( \psi_t^*(\theta_{t+1}) - \psi_t^*(\theta_t - h_t) + g_t(w_t) - h_t(w_t) \big) = \psi_{T+1}(u) + \psi_1^*(\theta_1) + \sum_{t=1}^{T} \big( \psi_t^*(\theta_{t+1}) - \psi_t^*(\theta_t - h_t) - (\theta_{t+1} - \theta_t + h_t)(w_t) \big) = \psi_{T+1}(u) + \psi_1^*(\theta_1) + \sum_{t=1}^{T} B_{\psi_t^*}(\theta_{t+1}, \theta_t - h_t),$$
which is the claimed result. □
Proof of Theorem 1.
First of all, since we have
$$\sum_{t=1}^{T} l_t(\tilde{X}_t(\gamma_t)) - l_t(\tilde{X}_t(\gamma)) \leq \sum_{t=1}^{T} \sum_{i=1}^{m} g_{i,t}(\gamma_{i,t} - \gamma_i) = \sum_{i=1}^{m} \Big( \sum_{t=1}^{T} g_{i,t}(\gamma_{i,t} - \gamma_i) \Big),$$
the overall regret can be considered as the sum of the regrets $\sum_{t=1}^{T} g_{i,t}(\gamma_{i,t} - \gamma_i)$. Next, we analyze the regret for each $i = 1, \ldots, m$. Define $\psi_{i,t}(\gamma_i) = \frac{\eta_{i,t}}{2} \|\gamma_i\|_F^2$. It is easy to verify that $\gamma_{i,t} \in \partial \psi_{i,t}^*(\theta_{i,t})$ for $t = 1, \ldots, T$. Applying Lemma A4 with $h_t = 0$, we obtain
$$\sum_{t=1}^{T} g_{i,t}(\gamma_{i,t} - \gamma_i) \leq \psi_{i,T+1}(\gamma_i) + \psi_{i,1}^*(\theta_{i,1}) + \sum_{t=1}^{T} B_{\psi_{i,t}^*}(\theta_{i,t+1}, \theta_{i,t}).$$
From the updating rule of $G_{i,t}$, we have $g_{i,t} = 0$ whenever $G_{i,t} = 0$. Let $t_0$ be the smallest index such that $G_{i,t_0} > 0$. Then we have
$$\sum_{t=1}^{T} B_{\psi_{i,t}^*}(\theta_{i,t+1}, \theta_{i,t}) = \sum_{t=t_0}^{T} B_{\psi_{i,t}^*}(\theta_{i,t+1}, \theta_{i,t}).$$
For $G_{i,t} > 0$, $\psi_{i,t}$ is $\eta_{i,t}$-strongly convex with respect to $\|\cdot\|_F$. From the duality of strong convexity and strong smoothness (see Proposition 2 in [17]), we have
$$\sum_{t=t_0}^{T} B_{\psi_{i,t}^*}(\theta_{i,t+1}, \theta_{i,t}) \leq \sum_{t=t_0}^{T} \frac{1}{2 \eta_{i,t}} \|g_{i,t}\|_F^2 = \sum_{t=t_0}^{T} \frac{\|g_{i,t}\|_F^2}{2 \sqrt{\|\theta_{i,1}\|_F + \sum_{s=1}^{t-1} \|g_{i,s}\|_F^2 + (L_t G_{i,t})^2}}.$$
From the definition of the Frobenius norm, we have
$$\|g_{i,t}\|_F^2 = \|g_t (\nabla^d X_{t-i})^\top\|_F^2 = \|g_t\|_2^2 \|\nabla^d X_{t-i}\|_2^2 \leq \|g_t\|_2^2 G_{i,t}^2 \leq L_{t+1}^2 G_{i,t}^2.$$
Then, we obtain
$$\sum_{t=t_0}^{T} \frac{\|g_{i,t}\|_F^2}{2 \sqrt{\sum_{s=1}^{t-1} \|g_{i,s}\|_F^2 + (L_t G_{i,t})^2}} \leq \sum_{t=t_0}^{T} \max\Big\{1, \frac{\|g_t\|_2}{L_t}\Big\} \frac{\|g_{i,t}\|_F^2}{2 \sqrt{\sum_{s=1}^{t} \|g_{i,s}\|_F^2}} \leq \max\Big\{1, \frac{\|g_1\|_2}{L_1}, \ldots, \frac{\|g_T\|_2}{L_T}\Big\} \sqrt{\sum_{t=1}^{T} \|g_{i,t}\|_F^2} \leq \Big(1 + \frac{L_{T+1}}{L_1}\Big) \sqrt{\sum_{t=1}^{T} \|g_{i,t}\|_F^2} \leq \Big(L_{T+1} + \frac{L_{T+1}^2}{L_1}\Big) \sqrt{\sum_{t=1}^{T} \|\nabla^d X_{t-i}\|_2^2},$$
where the second inequality uses Lemma 4 in [17] and the last inequality follows from the fact that $\|g_{i,t}\|_F \leq L_t \|\nabla^d X_{t-i}\|_2 \leq L_{T+1} \|\nabla^d X_{t-i}\|_2$. Furthermore, we have
$$\psi_{i,T+1}(\gamma_i) \leq \frac{\|\gamma_i\|_F^2}{2} \Big( \sqrt{\sum_{t=1}^{T} \|g_{i,t}\|_F^2} + L_{T+1} G_{i,T+1} + \|\theta_{i,1}\|_F \Big) \leq \frac{\|\gamma_i\|_F^2 L_{T+1}}{2} \sqrt{\sum_{t=1}^{T} \|\nabla^d X_{t-i}\|_2^2} + \frac{(L_{T+1} G_{i,T+1} + \|\theta_{i,1}\|_F) \|\gamma_i\|_F^2}{2}$$
and $\psi_{i,1}^*(\theta_{i,1}) \leq \frac{\|\theta_{i,1}\|_F}{2}$. Adding up over $i = 1, \ldots, m$, we obtain
$$\sum_{t=1}^{T} l_t(\tilde{X}_t(\gamma_t)) - l_t(\tilde{X}_t(\gamma)) \leq \sum_{i=1}^{m} \Big( \frac{\|\gamma_i\|_F^2 L_{T+1}}{2} + L_{T+1} + \frac{L_{T+1}^2}{L_1} \Big) \sqrt{\sum_{t=1}^{T} \|\nabla^d X_{t-i}\|_2^2} + \sum_{i=1}^{m} \frac{(L_{T+1} G_{i,T+1} + \|\theta_{i,1}\|_F) \|\gamma_i\|_F^2 + \|\theta_{i,1}\|_F}{2},$$
which is the claimed result. □
Proof of Theorem 2.
Define $\psi_t(\gamma) = \frac{\lambda_t}{4} \|\gamma\|_F^4 + \frac{\eta_t}{2} \|\gamma\|_F^2$. First of all, it is easy to verify that $\gamma_t \in \partial \psi_t^*(\theta_t)$. Applying Lemma A4 with $h_t = 0$, we have
$$\sum_{t=1}^{T} \langle g_t x_t^\top, \gamma_t - \gamma \rangle_F \leq \psi_{T+1}(\gamma) + \psi_1^*(\theta_1) + \sum_{t=1}^{T} B_{\psi_t^*}(\theta_{t+1}, \theta_t). \quad (A3)$$
Define $v_t \in \partial \psi_t^*(\theta_{t+1})$. Then we have
$$B_{\psi_t^*}(\theta_{t+1}, \theta_t) = \psi_t^*(\theta_{t+1}) - \psi_t^*(\theta_t) - \langle \gamma_t, \theta_{t+1} - \theta_t \rangle_F = \langle \theta_{t+1}, v_t \rangle_F - \psi_t(v_t) - \langle \theta_t, \gamma_t \rangle_F + \psi_t(\gamma_t) - \langle \gamma_t, \theta_{t+1} - \theta_t \rangle_F = \langle \theta_{t+1}, v_t - \gamma_t \rangle_F - \psi_t(v_t) + \psi_t(\gamma_t) = \langle g_t x_t^\top, \gamma_t - v_t \rangle_F + \langle \theta_t, v_t - \gamma_t \rangle_F - \psi_t(v_t) + \psi_t(\gamma_t) = \langle g_t x_t^\top, \gamma_t - v_t \rangle_F - B_{\psi_t}(v_t, \gamma_t) = \langle \gamma_t x_t x_t^\top, \gamma_t - v_t \rangle_F - \langle \nabla^d X_t x_t^\top, \gamma_t - v_t \rangle_F - B_{\tilde{\psi}_t}(v_t, \gamma_t) - B_{\bar{\psi}_t}(v_t, \gamma_t), \quad (A4)$$
where we define $\tilde{\psi}_t(\gamma) = \frac{\lambda_t}{4} \|\gamma\|_F^4$ and $\bar{\psi}_t(\gamma) = \frac{\eta_t}{2} \|\gamma\|_F^2$, so that $B_{\psi_t} = B_{\tilde{\psi}_t} + B_{\bar{\psi}_t}$. From the properties of the Frobenius norm, we have
$$\langle \gamma_t x_t x_t^\top, \gamma_t - v_t \rangle_F \leq \|\gamma_t x_t x_t^\top\|_F \|\gamma_t - v_t\|_F \leq \|x_t\|_2^2 \|\gamma_t\|_F \|\gamma_t - v_t\|_F.$$
Following the idea of [33], we can lower bound $B_{\tilde{\psi}_t}(v_t, \gamma_t)$ by $\frac{\lambda_t}{2} \|\gamma_t\|_F^2 \|\gamma_t - v_t\|_F^2$ as follows:
$$\frac{\lambda_t}{2} \|\gamma_t\|_F^2 \|\gamma_t - v_t\|_F^2 = \frac{\lambda_t}{2} \|\gamma_t\|_F^2 \big( \|\gamma_t\|_F^2 + \|v_t\|_F^2 - 2 \langle \gamma_t, v_t \rangle_F \big) \leq \frac{\lambda_t}{4} \big( \|\gamma_t\|_F^4 + \|v_t\|_F^4 - 2 \|\gamma_t\|_F^2 \|v_t\|_F^2 \big) + \frac{\lambda_t}{2} \|\gamma_t\|_F^2 \big( \|\gamma_t\|_F^2 + \|v_t\|_F^2 - 2 \langle \gamma_t, v_t \rangle_F \big) = \frac{\lambda_t}{4} \|v_t\|_F^4 + \frac{3\lambda_t}{4} \|\gamma_t\|_F^4 - \lambda_t \|\gamma_t\|_F^2 \langle \gamma_t, v_t \rangle_F = \frac{\lambda_t}{4} \|v_t\|_F^4 - \frac{\lambda_t}{4} \|\gamma_t\|_F^4 + \lambda_t \|\gamma_t\|_F^2 \langle \gamma_t, \gamma_t \rangle_F - \lambda_t \|\gamma_t\|_F^2 \langle \gamma_t, v_t \rangle_F = \frac{\lambda_t}{4} \|v_t\|_F^4 - \frac{\lambda_t}{4} \|\gamma_t\|_F^4 - \lambda_t \|\gamma_t\|_F^2 \langle \gamma_t, v_t - \gamma_t \rangle_F = B_{\tilde{\psi}_t}(v_t, \gamma_t).$$
Thus, for $\lambda_t \geq 0$, we have
$$\langle \gamma_t x_t x_t^\top, \gamma_t - v_t \rangle_F - B_{\tilde{\psi}_t}(v_t, \gamma_t) \leq 2 \sqrt{\frac{\|x_t\|_2^4}{2 \lambda_t}} \sqrt{B_{\tilde{\psi}_t}(v_t, \gamma_t)} - B_{\tilde{\psi}_t}(v_t, \gamma_t) \leq \frac{\|x_t\|_2^4}{2 \lambda_t},$$
where the second inequality uses the fact that $2ab - b^2 \leq a^2$. Let $t_0$ be the smallest index such that $\lambda_{t_0} > 0$. Then we have
$$\sum_{t=1}^{T} \big( \langle \gamma_t x_t x_t^\top, \gamma_t - v_t \rangle_F - B_{\tilde{\psi}_t}(v_t, \gamma_t) \big) \leq \sum_{t=t_0}^{T} \frac{\|x_t\|_2^4}{2 \lambda_t} = \sum_{t=t_0}^{T} \frac{\|x_t\|_2^4}{2 \sqrt{\sum_{s=1}^{t} \|x_s\|_2^4}} \leq \sqrt{\sum_{t=1}^{T} \|x_t\|_2^4}, \quad (A5)$$
where the last inequality uses Lemma 4 in [17]. Similarly, let $t_1$ be the smallest index such that $\eta_{t_1} > 0$. Then we obtain the upper bound
$$\sum_{t=1}^{T} \big( -\langle \nabla^d X_t x_t^\top, \gamma_t - v_t \rangle_F - B_{\bar{\psi}_t}(v_t, \gamma_t) \big) \leq \sum_{t=1}^{T} \big( \|\nabla^d X_t x_t^\top\|_F \|\gamma_t - v_t\|_F - B_{\bar{\psi}_t}(v_t, \gamma_t) \big) \leq \sum_{t=t_1}^{T} \Big( 2 \sqrt{\frac{\|\nabla^d X_t x_t^\top\|_F^2}{2 \eta_t}} \sqrt{B_{\bar{\psi}_t}(v_t, \gamma_t)} - B_{\bar{\psi}_t}(v_t, \gamma_t) \Big) \leq \sum_{t=t_1}^{T} \frac{\|\nabla^d X_t x_t^\top\|_F^2}{2 \eta_t} = \sum_{t=t_1}^{T} \frac{\|\nabla^d X_t x_t^\top\|_F^2}{2 \sqrt{\|\theta_1\|_F + \sum_{s=1}^{t-1} \|\nabla^d X_s x_s^\top\|_F^2 + (G_t \|x_t\|_2)^2}} \leq \max\Big\{1, \frac{\|\nabla^d X_1 x_1^\top\|_F}{G_1 \|x_1\|_2}, \ldots, \frac{\|\nabla^d X_T x_T^\top\|_F}{G_T \|x_T\|_2}\Big\} \sum_{t=t_1}^{T} \frac{\|\nabla^d X_t x_t^\top\|_F^2}{2 \sqrt{\sum_{s=1}^{t} \|\nabla^d X_s x_s^\top\|_F^2}} \leq \Big(1 + \frac{G_{T+1}}{G_1}\Big) \sqrt{\sum_{t=1}^{T} \|\nabla^d X_t x_t^\top\|_F^2}. \quad (A6)$$
Combining (A3)–(A6), we obtain
$$\sum_{t=1}^{T} \langle g_t x_t^\top, \gamma_t - \gamma \rangle_F \leq \frac{(\sqrt{m} G_{T+1}^2 + \|\theta_1\|_F) \|\gamma\|_F^2}{2} + \psi_1^*(\theta_1) + \Big(1 + \frac{\|\gamma\|_F^4}{4}\Big) \sqrt{\sum_{t=1}^{T} \|x_t\|_2^4} + \Big(1 + \frac{G_{T+1}}{G_1} + \frac{\|\gamma\|_F^2}{2}\Big) \sqrt{\sum_{t=1}^{T} \|\nabla^d X_t x_t^\top\|_F^2}.$$
For $\theta_1 \neq 0$, it is easy to verify that $\psi_1^*(\theta_1) \leq \langle w_1, \theta_1 \rangle_F \leq \frac{\|\theta_1\|_F^2}{\eta_1} \leq \|\theta_1\|_F$. Putting this into the inequality above, we obtain the claimed result. □
Proof of Theorem 3.
Define
$$\psi_t: \Delta_K \to \mathbb{R}, \quad w \mapsto \eta_t \sum_{k \in I_w} w_k \log w_k + \eta_t \log K,$$
where $I_w = \{i = 1, \ldots, K \mid w_i \neq 0\}$. It can be verified that $w_t \in \partial \psi_t^*(\theta_t - h_t)$. Applying Lemma A4, we obtain
$$\sum_{t=1}^{T} z_t(w_t - u) \leq \psi_{T+1}(u) + \psi_1^*(\theta_1) + \sum_{t=1}^{T} B_{\psi_t^*}(\theta_{t+1}, \theta_t - h_t).$$
From the definition of $\psi_t$, it follows that $\psi_{T+1}(u) \leq \sqrt{\frac{\log K}{2} \sum_{t=1}^{T} \|z_t - h_t\|_\infty^2}$ and $\psi_1^*(\theta_1) = 0$ hold. Define $v_t \in \partial \psi_t^*(\theta_{t+1})$. Next, we bound the third term as follows:
$$B_{\psi_t^*}(\theta_{t+1}, \theta_t - h_t) = \psi_t^*(\theta_{t+1}) - \psi_t^*(\theta_t - h_t) - (h_t - z_t)(w_t) = \theta_{t+1}(v_t) - \psi_t(v_t) - (\theta_t - h_t)(w_t) + \psi_t(w_t) - (h_t - z_t)(w_t) = (h_t - z_t)(v_t - w_t) - \big( \psi_t(v_t) - \psi_t(w_t) - (\theta_t - h_t)(v_t - w_t) \big) = (h_t - z_t)(v_t - w_t) - B_{\psi_t}(v_t, w_t) \leq (h_t - z_t)(v_t - w_t) - \eta_{t+1} \|v_t - w_t\|_1^2 + (\eta_{t+1} - \eta_t) \|v_t - w_t\|_1^2 \leq \|h_t - z_t\|_\infty \|v_t - w_t\|_1 - \eta_{t+1} \|v_t - w_t\|_1^2 + 4(\eta_{t+1} - \eta_t) \leq \frac{\|h_t - z_t\|_\infty^2}{4 \eta_{t+1}} + 4(\eta_{t+1} - \eta_t),$$
where the first inequality uses the fact that $\psi_t$ is $2\eta_t$-strongly convex w.r.t. $\|\cdot\|_1$. Adding up from 1 to $T$, we have
$$\sum_{t=1}^{T} B_{\psi_t^*}(\theta_{t+1}, \theta_t - h_t) \leq \sum_{t=1}^{T} \Big( \frac{\|h_t - z_t\|_\infty^2}{4 \eta_{t+1}} + 4(\eta_{t+1} - \eta_t) \Big) \leq \sqrt{\frac{\log K}{2} \sum_{t=1}^{T} \|h_t - z_t\|_\infty^2} + 4 \eta_{T+1} \leq \sqrt{\frac{\log K}{2} \sum_{t=1}^{T} \|h_t - z_t\|_\infty^2} + 8 \sqrt{\log K \sum_{t=1}^{T} \|h_t - z_t\|_\infty^2}.$$
Combining the inequalities, we obtain
$$\sum_{t=1}^{T} l\Big(X_t, \sum_{i=1}^{K} w_{i,t} \tilde{X}_t^i\Big) - \sum_{t=1}^{T} l(X_t, \tilde{X}_t^k) \leq \sum_{t=1}^{T} \sum_{i=1}^{K} w_{i,t} l(X_t, \tilde{X}_t^i) - \sum_{t=1}^{T} l(X_t, \tilde{X}_t^k) = \sum_{t=1}^{T} z_t(w_t) - \sum_{t=1}^{T} l(X_t, \tilde{X}_t^k) \leq \big( \sqrt{2 \log K} + 8 \sqrt{\log K} \big) \sqrt{\sum_{t=1}^{T} \|h_t - z_t\|_\infty^2},$$
where the first inequality follows from Jensen's inequality. Furthermore, since $l$ is L-Lipschitz in its first argument, we have
$$\|h_t - z_t\|_\infty = \max_{k \in \{1, \ldots, K\}} |z_{k,t} - h_{k,t}| \leq L \|\nabla^d X_t\|_2.$$
Finally, we obtain the regret upper bound
$$\sum_{t=1}^{T} l\Big(X_t, \sum_{i=1}^{K} w_{i,t} \tilde{X}_t^i\Big) - \sum_{t=1}^{T} l(X_t, \tilde{X}_t^k) \leq \big( \sqrt{2 \log K} + 8 \sqrt{\log K} \big) \sqrt{\sum_{t=1}^{T} L^2 \|\nabla^d X_t\|_2^2},$$
which is the claimed result. □

Appendix C

We summarize the main notations used throughout the article in Table A1.
Table A1. Nomenclature.
$(\mathcal{X}, \|\cdot\|)$: finite-dimensional normed vector space
$(\mathcal{X}^*, \|\cdot\|_*)$: the dual space of $(\mathcal{X}, \|\cdot\|)$ with the dual norm
$L(\mathcal{X}, \mathcal{X})$: vector space of bounded linear operators from $\mathcal{X}$ to $\mathcal{X}$
$\|\alpha\|_{op} = \sup_{x \in \mathcal{X}, x \neq 0} \frac{\|\alpha x\|}{\|x\|}$: the operator norm of $\alpha \in L(\mathcal{X}, \mathcal{X})$
$\|x\|_2 = \sqrt{\sum_{i=1}^{d} x_i^2}$: the 2-norm of $x \in \mathbb{R}^d$
$\|x\|_1 = \sum_{i=1}^{d} |x_i|$: the 1-norm of $x \in \mathbb{R}^d$
$\|x\|_\infty = \max\{|x_1|, \ldots, |x_d|\}$: the max norm of $x \in \mathbb{R}^d$
$\langle A, B \rangle_F = \operatorname{tr}(A^\top B)$: the Frobenius inner product
$\|A\|_F = \sqrt{\langle A, A \rangle_F}$: the Frobenius norm
$\Delta_d = \{x \in \mathbb{R}^d \mid \sum_{i=1}^{d} x_i = 1, \ x_i \geq 0\}$: the standard d-simplex
$\psi: W \to \mathbb{R}$: a closed convex function
$\partial \psi(w) = \{g \in \mathcal{X}^* \mid \forall v \in W.\ \psi(v) - \psi(w) \geq g(v - w)\}$: the subdifferential of $\psi$ at $w$
$\psi^*: \mathcal{X}^* \to \mathbb{R}, \ \theta \mapsto \sup_{w \in W} (\theta(w) - \psi(w))$: the convex conjugate of $\psi$
$B_\psi(u, v) = \psi(u) - \psi(v) - g(u - v)$, where $g \in \partial \psi(v)$: the Bregman divergence

Appendix D

For the synthetic data, the relative performance of the proposed algorithms after the first 1000 iterations are plotted in Figure A1, Figure A2, and Figure A3. For each setting, we calculate the average loss after the first 1000 iterations and plot the difference of the proposed algorithms compared to the average loss incurred by the best baseline algorithm.
Figure A1. Relative performance for setting 1.
Figure A2. Relative performance for setting 2.
Figure A3. Relative performance for setting 3.
Similarly, we plot the relative performance for the real-world data over the time horizon in Figure A4, Figure A5 and Figure A6.
Figure A4. Relative performance for stock data.
Figure A5. Relative performance for Google Flu.
Figure A6. Relative performance for electricity demand.

References

1. Shumway, R.; Stoffer, D. Time Series Analysis and Its Applications: With R Examples; Springer Texts in Statistics; Springer: New York, NY, USA, 2010.
2. Chujai, P.; Kerdprasop, N.; Kerdprasop, K. Time series analysis of household electric consumption with ARIMA and ARMA models. In Proceedings of the International MultiConference of Engineers and Computer Scientists, Hong Kong, China, 13–15 March 2013; Volume 1, pp. 295–300.
3. Ghofrani, M.; Arabali, A.; Etezadi-Amoli, M.; Fadali, M.S. Smart scheduling and cost-benefit analysis of grid-enabled electric vehicles for wind power integration. IEEE Trans. Smart Grid 2014, 5, 2306–2313.
4. Rounaghi, M.M.; Zadeh, F.N. Investigation of market efficiency and financial stability between S&P 500 and London stock exchange: Monthly and yearly forecasting of time series stock returns using ARMA model. Phys. A Stat. Mech. Its Appl. 2016, 456, 10–21.
5. Zhu, B.; Chevallier, J. Carbon price forecasting with a hybrid ARIMA and least squares support vector machines methodology. In Pricing and Forecasting Carbon Markets; Springer: Berlin/Heidelberg, Germany, 2017; pp. 87–107.
6. Anava, O.; Hazan, E.; Mannor, S.; Shamir, O. Online learning for time series prediction. In Proceedings of the Conference on Learning Theory, Princeton, NJ, USA, 23–26 June 2013; pp. 172–184.
7. Liu, C.; Hoi, S.C.; Zhao, P.; Sun, J. Online ARIMA algorithms for time series prediction. In Proceedings of the Thirtieth AAAI Conference on Artificial Intelligence, Phoenix, AZ, USA, 12–17 February 2016; pp. 1867–1873.
8. Xie, C.; Bijral, A.; Ferres, J.L. Nonstop: A nonstationary online prediction method for time series. IEEE Signal Process. Lett. 2018, 25, 1545–1549.
9. Yang, H.; Pan, Z.; Tao, Q.; Qiu, J. Online learning for vector autoregressive moving-average time series prediction. Neurocomputing 2018, 315, 9–17.
10. Joulani, P.; György, A.; Szepesvári, C. A modular analysis of adaptive (non-)convex optimization: Optimism, composite objectives, variance reduction, and variational bounds. Theor. Comput. Sci. 2020, 808, 108–138.
11. Zhou, Y.; Sanches Portella, V.; Schmidt, M.; Harvey, N. Regret bounds without Lipschitz continuity: Online learning with relative-Lipschitz losses. Adv. Neural Inf. Process. Syst. 2020, 33, 15823–15833.
12. Jamil, W.; Bouchachia, A. Model selection in online learning for times series forecasting. In UK Workshop on Computational Intelligence; Springer: Berlin/Heidelberg, Germany, 2018; pp. 83–95.
13. Jamil, W.; Kalnishkan, Y.; Bouchachia, H. Aggregation algorithm vs. average for time series prediction. In Proceedings of the ECML PKDD 2016 Workshop on Large-Scale Learning from Data Streams in Evolving Environments, Riva del Garda, Italy, 23 September 2016; pp. 1–14.
14. Orabona, F.; Pál, D. Coin betting and parameter-free online learning. In Proceedings of the 30th International Conference on Neural Information Processing Systems, Barcelona, Spain, 4–9 December 2016; pp. 577–585.
15. Cutkosky, A.; Orabona, F. Black-box reductions for parameter-free online learning in Banach spaces. In Proceedings of the Conference on Learning Theory, Stockholm, Sweden, 6–9 July 2018; pp. 1493–1529.
16. Cutkosky, A.; Boahen, K. Online learning without prior information. In Proceedings of the Conference on Learning Theory, Amsterdam, The Netherlands, 7–10 July 2017; pp. 643–677.
17. Orabona, F.; Pál, D. Scale-free online learning. Theor. Comput. Sci. 2018, 716, 50–69.
18. Hamilton, J.D. Time Series Analysis; Princeton University Press: Princeton, NJ, USA, 1994; Volume 2.
19. Box, G.E.; Jenkins, G.M.; Reinsel, G.C.; Ljung, G.M. Time Series Analysis: Forecasting and Control; John Wiley & Sons: Hoboken, NJ, USA, 2015.
20. Brockwell, P.J.; Davis, R.A. Time Series: Theory and Methods; Springer Science & Business Media: Berlin/Heidelberg, Germany, 2013.
21. Georgiou, T.T.; Lindquist, A. A convex optimization approach to ARMA modeling. IEEE Trans. Autom. Control 2008, 53, 1108–1119.
22. Lii, K.S. Identification and estimation of non-Gaussian ARMA processes. IEEE Trans. Acoust. Speech Signal Process. 1990, 38, 1266–1276.
23. Huang, S.J.; Shih, K.R. Short-term load forecasting via ARMA model identification including non-Gaussian process considerations. IEEE Trans. Power Syst. 2003, 18, 673–679.
24. Ding, F.; Shi, Y.; Chen, T. Performance analysis of estimation algorithms of nonstationary ARMA processes. IEEE Trans. Signal Process. 2006, 54, 1041–1053.
25. Yang, H.; Pan, Z.; Tao, Q. Online learning for time series prediction of AR model with missing data. Neural Process. Lett. 2019, 50, 2247–2263.
26. Ding, J.; Noshad, M.; Tarokh, V. Order selection of autoregressive processes using bridge criterion. In Proceedings of the 2015 IEEE International Conference on Data Mining Workshop (ICDMW), Atlantic City, NJ, USA, 14–17 November 2015; pp. 615–622.
27. Lütkepohl, H. New Introduction to Multiple Time Series Analysis; Springer Science & Business Media: Berlin/Heidelberg, Germany, 2005.
28. Steinhardt, J.; Liang, P. Adaptivity and optimism: An improved exponentiated gradient algorithm. In Proceedings of the International Conference on Machine Learning, PMLR, Beijing, China, 22–24 June 2014; pp. 1593–1601.
29. De Rooij, S.; Van Erven, T.; Grünwald, P.D.; Koolen, W.M. Follow the leader if you can, hedge if you must. J. Mach. Learn. Res. 2014, 15, 1281–1316.
30. Bollerslev, T. Generalized autoregressive conditional heteroskedasticity. J. Econom. 1986, 31, 307–327.
31. Deng, Y.; Fan, H.; Wu, S. A hybrid ARIMA-LSTM model optimized by BP in the forecast of outpatient visits. J. Ambient Intell. Humaniz. Comput. 2020.
32. Tutun, S.; Chou, C.A.; Canıyılmaz, E. A new forecasting framework for volatile behavior in net electricity consumption: A case study in Turkey. Energy 2015, 93, 2406–2422.
33. Lu, H. "Relative continuity" for non-Lipschitz nonsmooth convex optimization using stochastic (or deterministic) mirror descent. INFORMS J. Optim. 2019, 1, 288–303.
Figure 1. Results for setting 1 (sanity check), using a stationary ARIMA(5,2,1) model.
Figure 2. Results for setting 2 (time-varying parameters), using a non-stationary ARIMA(5,2,1) model.
Figure 3. Results for setting 3 (time-varying models), using a combination of stationary ARIMA(5,2,1) and ARIMA(5,2,0) models.
Figure 4. Results for stock data.
Figure 5. Results for Google Flu data.
Figure 6. Results for electricity demand data.
Figure 7. Model selection in setting 1.
Figure 8. Model selection in setting 2.
Figure 9. Model selection in setting 3.
Figure 10. Model selection for stock data.
Figure 11. Model selection for Google Flu.
Figure 12. Model selection for electricity demand.
Table 1. Algorithms for online learning of ARIMA.
Problem | Algorithm | Reference | Tuning-Free | Loss Function | Regret Dependence
OL for ARIMA | OGD | [6,7,8,9] | no | any | largest gradient norm
OL for ARIMA | ONS | [6,7,8,9] | no | exp-concave | largest gradient norm
PF-OCO | Coin Betting | [14,15] | yes | normalized gradient | gradient vectors
PF-OCO | FreeRex | [16] | yes | any | largest gradient norm
PF-OCO | SF-MD | [17] | no | any | gradient vectors
PF-OCO | SOLO-FTRL | [17] | yes | any | largest gradient norm
OL for ARIMA | Algorithm 1 | This Paper | yes | Lipschitz | data sequence
OL for ARIMA | Algorithm 2 | This Paper | yes | squared error | data sequence
OMS for ARIMA | EG | [12,13] | no | bounded | loss of the worst model
OMS for ARIMA | Algorithm 3 | This Paper | yes | local Lipschitz | data sequence
For non-Lipschitz-continuous loss functions, the gradient norm can be unbounded, so algorithms whose guarantees depend on the largest gradient norm can fail without further assumptions on the data generation. For OGD, the learning rate and the diameter of the decision set need to be tuned in practice; ONS has an additional hyperparameter controlling numerical stability; applying SF-MD to ARIMA requires tuning the diameter of the model parameter set; and EG requires a tuned learning rate to obtain optimal performance.
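To illustrate this tuning burden, the following minimal sketch (ours, not the authors' implementation; function name and default values are assumptions) runs projected online gradient descent on an AR(k) approximation with squared loss. The learning rate eta and decision-set radius D are exactly the hyperparameters that the proposed parameter-free algorithms dispense with:

```python
import numpy as np

def ogd_ar(series, k=5, eta=0.01, D=10.0):
    """Projected OGD for an AR(k) model with squared loss.

    eta (learning rate) and D (l2 radius of the decision set) must be
    tuned; a poor choice degrades performance in practice.
    """
    series = np.asarray(series, dtype=float)
    w = np.zeros(k)
    losses = []
    for t in range(k, len(series)):
        x = series[t - k:t][::-1]       # the k most recent observations
        err = w @ x - series[t]         # prediction error
        losses.append(err ** 2)
        w -= eta * 2.0 * err * x        # gradient step on the squared loss
        norm = np.linalg.norm(w)
        if norm > D:                    # project back onto the l2 ball
            w *= D / norm
    return np.array(losses)
```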