
ORIGINAL RESEARCH article

Front. Appl. Math. Stat., 11 February 2021
Sec. Mathematics of Computation and Data Science
Volume 6 - 2020 | https://doi.org/10.3389/fams.2020.551138

Financial Forecasting With α-RNNs: A Time Series Modeling Approach

  • 1Department of Applied Math, Illinois Institute of Technology, Chicago, IL, United States
  • 2Stuart School of Business, Illinois Institute of Technology, Chicago, IL, United States

The era of modern financial data modeling seeks machine learning techniques which are suitable for noisy and non-stationary big data. We demonstrate how a general class of exponentially smoothed recurrent neural networks (α-RNNs) is well suited to modeling dynamical systems arising in big data applications such as high frequency and algorithmic trading. Application of exponentially smoothed RNNs to minute-level Bitcoin prices and CME futures tick data highlights the efficacy of exponential smoothing for multi-step time series forecasting. Our α-RNNs are also compared with more complex, “black-box” architectures such as GRUs and LSTMs and shown to provide comparable performance, but with far fewer model parameters and lower network complexity.

1. Introduction

Recurrent neural networks (RNNs) are the building blocks of modern sequential learning. RNNs use recurrent layers to capture non-linear temporal dependencies with a relatively small number of parameters (Graves, 2013). They learn temporal dynamics by mapping an input sequence to a hidden state sequence and outputs, via a recurrent layer and a feedforward layer.

There have been exhaustive empirical studies on the application of recurrent neural networks to prediction from financial time series data such as historical limit order book and price history (Borovykh et al., 2017; Dixon, 2018; Borovkova and Tsiamas, 2019; Chen and Ge, 2019; Mäkinen et al., 2019; Sirignano and Cont, 2019). Sirignano and Cont (2019) find evidence that stacking networks leads to superior performance on intra-day stock data combined with technical indicators, whereas Bao et al. (2017) combine wavelet transforms and stacked autoencoders with LSTMs on OHLC bars and technical indicators. Borovykh et al. (2017) find evidence that dilated convolutional networks out-perform LSTMs on various indices. Dixon (2018) demonstrates that RNNs outperform feed-forward networks with lagged features on limit order book data.

There appears to be a chasm between the statistical modeling literature (see, e.g., Box and Jenkins 1976; Kirchgässner and Wolters 2007; Hamilton 1994) and the machine learning literature (see, e.g., Hochreiter and Schmidhuber 1997; Pascanu et al. 2012; Bayer 2015). One of the main contributions of this paper is to demonstrate how RNNs, and specifically a class of novel exponentially smoothed RNNs (α-RNNs) proposed in Dixon (2021), can be used in a financial time series modeling framework. In this framework, we rely on statistical diagnostics in combination with cross-validation to identify the best choice of architecture. These statistical tests characterize stationarity and memory cut-off length and provide insight into whether the data is suitable for longer-term forecasting and whether the model must be non-stationary.

In contrast to state-of-the-art RNNs such as LSTMs and Gated Recurrent Units (GRUs) (Chung et al., 2014), which were designed primarily for speech transcription, the proposed class of α-RNNs is designed for time series forecasting using numeric data. α-RNNs not only alleviate the gradient problem but are designed to i) require fewer parameters and numbers of recurrent units and considerably fewer samples to attain the same prediction accuracy1; ii) support both stationary and non-stationary time series2; and iii) be mathematically accessible and characterized in terms of well known concepts in classical time series modeling, rather than appealing to logic and circuit diagrams.

As a result, through simple analysis of the time series properties of α-RNNs, we show how the value of the smoothing parameter, α, directly characterizes their dynamical behavior and provides a model which is more intuitive for time series modeling than GRUs and LSTMs while performing comparably. We argue that for time series modeling problems in finance, some of the more complicated components, such as the reset gates and cell memory present in GRUs and LSTMs but absent in α-RNNs, may be redundant for our data. We exploit these properties in two ways: i) first, we use a statistical test for stationarity to determine whether to deploy a static or dynamic α-RNN model; and ii) we are able to reduce the training time and the memory requirements for storing the model, and in general expect α-RNNs to be more accurate for shorter time series as they require less training data and are less prone to over-fitting. The latter is a point of practicality, as many applications in finance are not necessarily big data problems, and the restrictive amount of data favors an architecture with fewer parameters to avoid over-fitting.

The remainder of this paper is outlined as follows. Section 2 introduces the static α-RNN. Section 3 bridges the time series modeling approach with RNNs to provide insight on the network properties. Section 4 introduces a dynamic version of the model and illustrates the dynamical behavior of α. Details of the training, implementation and experiments using financial data together with the results are presented in Section 5. Finally, Section 6 concludes with directions for future research.

2. α-RNNs

Given auto-correlated observations of covariates or predictors, $x_t$, and continuous responses $y_t$ at times $t=1,\dots,N$, in the time series data $\mathcal{D}:=\{(x_t,y_t)\}_{t=1}^{N}$, our goal is to construct an m-step ($m>0$) ahead time series predictor, $\hat{y}_{t+m}=F(\underline{x}_t)$, of an observed target, $y_{t+m}\in\mathbb{R}^n$, from a $p$-length input sequence $\underline{x}_t$:

$$y_{t+m} := F(\underline{x}_t) + u_t, \quad \text{where } \underline{x}_t := \{x_{t-p+1},\dots,x_t\},$$

$x_{t-j} =: L^j[x_t]$ is the $j$th lagged observation of $x_t\in\mathbb{R}^d$, for $j=0,\dots,p-1$, and $u_t$ is the homoscedastic model error at time $t$. We introduce the α-RNN model (as shown in Figure 1):

$$\hat{y}_{t+m} = F_{W,b,\alpha}(\underline{x}_t) \tag{1}$$

where $F_{W,b,\alpha}(\underline{x}_t)$ is an $\alpha\in[0,1]$ smoothed RNN with weight matrices $W:=(W_h, U_h, W_y)$, where the input weight matrix $W_h\in\mathbb{R}^{H\times d}$, the recurrence weight matrix $U_h\in\mathbb{R}^{H\times H}$, the output weight matrix $W_y\in\mathbb{R}^{n\times H}$, and $H$ is the number of hidden units. The hidden and output bias vectors are given by $b:=(b_h, b_y)$.

FIGURE 1. An illustrative example of an α-RNN with an alternating hidden recurrent layer (with blue nodes) and a smoothing layer (white block), “unfolded” over a sequence of six time steps. Each lagged feature $x_{t-i}$ in the sequence $\underline{x}_t$ is denoted by the yellow nodes. The hidden recurrent layer contains $H$ units (blue nodes) and the $i$th output, after smoothing, at time step $t$ is denoted by $\tilde{h}_{t-i}$. At the last time step $t$, the hidden units connect to a single unactivated output unit to give $\hat{y}_{t+m}$ (red node).

For each index in a sequence, $s = t-p+2,\dots,t$, forward passes repeatedly update a hidden internal state $\hat{h}_s\in\mathbb{R}^H$, using the following model:

$$\begin{aligned}
\text{(output)}\quad & \hat{y}_{t+m} = W_y\hat{h}_t + b_y,\\
\text{(hidden state update)}\quad & \hat{h}_s = \sigma(U_h\tilde{h}_{s-1} + W_h x_s + b_h),\quad s=t-p+2,\dots,t,\\
\text{(smoothing)}\quad & \tilde{h}_s = \alpha\hat{h}_s + (1-\alpha)\tilde{h}_{s-1},
\end{aligned}$$

where $\sigma(\cdot):=\tanh(\cdot)$ is the activation function and $\tilde{h}_s\in\mathbb{R}^H$ is an exponentially smoothed version of the hidden state $\hat{h}_s$, with the starting condition in each sequence, $\hat{h}_{t-p+1}=\sigma(W_h x_{t-p+1})$.
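To make the recursion above concrete, the following is a minimal NumPy sketch of the static α-RNN forward pass over a single input sequence. The function name, the initialization of the smoothed state at the first hidden state, and the random illustrative weights are our own assumptions for exposition; the reference implementation is the authors' code in the repository listed in the Data Availability Statement.

```python
import numpy as np

def alpha_rnn_forecast(x_seq, W_h, U_h, b_h, W_y, b_y, alpha):
    """Static alpha-RNN forward pass over one sequence of p lagged inputs.

    x_seq: (p, d) array holding x_{t-p+1}, ..., x_t
    W_h: (H, d), U_h: (H, H), b_h: (H,), W_y: (n, H), b_y: (n,)
    alpha: scalar smoothing parameter in [0, 1]
    Returns the m-step ahead forecast, an (n,) array.
    """
    # Starting condition: h_hat_{t-p+1} = tanh(W_h x_{t-p+1})
    h_hat = np.tanh(W_h @ x_seq[0])
    h_tilde = h_hat  # assumption: smoothed state starts at the first hidden state
    for s in range(1, len(x_seq)):
        # Hidden state update uses the *smoothed* previous state
        h_hat = np.tanh(U_h @ h_tilde + W_h @ x_seq[s] + b_h)
        # Exponential smoothing of the hidden state
        h_tilde = alpha * h_hat + (1.0 - alpha) * h_tilde
    # Single unactivated output layer applied to the last hidden state
    return W_y @ h_hat + b_y

# Illustrative usage with random (untrained) weights
rng = np.random.default_rng(0)
p, d, H, n = 4, 1, 5, 1
x_seq = rng.standard_normal((p, d))
W_h, U_h = 0.1 * rng.standard_normal((H, d)), 0.1 * rng.standard_normal((H, H))
W_y = 0.1 * rng.standard_normal((n, H))
y_hat = alpha_rnn_forecast(x_seq, W_h, U_h, np.zeros(H), W_y, np.zeros(n), alpha=0.5)
```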

3. Univariate Time Series Modeling With Endogenous Features

This section bridges the time series modeling literature (Box and Jenkins, 1976; Kirchgässner and Wolters, 2007; Li and Zhu, 2020) and the machine learning literature. More precisely, we show the conditions under which plain RNNs are identical to autoregressive time series models and thus how RNNs generalize autoregressive models. Then we build on this result by applying time series analysis to characterize the behavior of static α-RNNs.

We shall assume here for ease of exposition that the time series data is univariate and the predictor is endogenous3, so that the data is $\mathcal{D}:=\{y_t\}_{t=1}^{N}$.

We find it instructive to show that plain RNNs are non-linear AR(p) models. For ease of exposition, consider the simplest case of an RNN with one hidden unit, $H=1$. Without loss of generality, we set $U_h = W_h = \phi$, $W_y = 1$, $b_h = 0$ and $b_y = \mu$. Under backward substitution, a plain-RNN, $F_{W,b}(\underline{x}_t)$, with sequence length $p$, is a non-linear auto-regressive, NAR($p$), model of order $p$:

$$\begin{aligned}
\hat{h}_{t-p+1} &= \sigma(\phi y_{t-p+1})\\
\hat{h}_{t-p+2} &= \sigma(\phi\hat{h}_{t-p+1} + \phi y_{t-p+2})\\
&\;\;\vdots\\
\hat{h}_{t} &= \sigma(\phi\hat{h}_{t-1} + \phi y_{t})\\
\hat{y}_{t+m} &= \hat{h}_t + \mu
\end{aligned}$$

then

$$\hat{y}_{t+m} = \mu + \sigma\Big(\phi\big(1 + \sigma\big(\phi\big(L + \sigma\big(\phi\big(L^2 + \cdots + \sigma(\phi L^{p-1})\big)\big)\big)\big)\big)\Big)[y_t]. \tag{2}$$

When the activation is the identity function, $\sigma:=\text{Id}$, then we recover the AR($p$) model

$$\hat{y}_{t+m} = \mu + \sum_{i=0}^{p-1}\phi_{i+1}L^i[y_t], \quad \phi_i := \phi^i, \tag{3}$$

with geometrically decaying autoregressive coefficients when |ϕ|<1.
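The reduction to an AR($p$) model can be checked numerically. The short sketch below compares the unrolled single-unit linear RNN against the AR($p$) sum of Eq. 3; the parameter values and lagged observations are arbitrary illustrative choices.

```python
import numpy as np

# Check Eq. 3: with identity activation, U_h = W_h = phi, W_y = 1, b_h = 0 and
# b_y = mu, the unrolled single-unit RNN is an AR(p) model with coefficients phi^{i+1}.
phi, mu, p = 0.7, 0.1, 4
rng = np.random.default_rng(1)
y = rng.standard_normal(p)            # lagged observations y_{t-p+1}, ..., y_t

h = phi * y[0]                        # starting condition h_{t-p+1} = phi * y_{t-p+1}
for s in range(1, p):
    h = phi * h + phi * y[s]          # unactivated recurrence
rnn_forecast = h + mu

# Direct AR(p) form: mu + sum_i phi^{i+1} L^i[y_t], where L^i[y_t] = y[p-1-i]
ar_forecast = mu + sum(phi ** (i + 1) * y[p - 1 - i] for i in range(p))

assert np.isclose(rnn_forecast, ar_forecast)
```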

The α-RNN(p) is almost identical to a plain RNN, but with an additional scalar smoothing parameter, α, which provides the recurrent network with “long-memory”4. To see this, let us consider a one-step ahead univariate α-RNN(p) in which the smoothing parameter is fixed and H=1.

This model augments the plain-RNN by replacing $\hat{h}_{s-1}$ in the hidden layer with an exponentially smoothed hidden state $\tilde{h}_{s-1}$. The effect of the smoothing is to provide infinite memory when $\alpha\neq 1$. For the special case when $\alpha=1$, we recover the plain RNN with short memory of length $p\ll N$.

We can easily verify this informally by simplifying the parameterization and considering the unactivated case. Setting $b_y = b_h = 0$, $U_h = W_h = \phi$ and $W_y = 1$:

$$\begin{aligned}
\hat{y}_{t+1} &= \hat{h}_t, &(4)\\
&= \phi(\tilde{h}_{t-1} + y_t), &(5)\\
&= \phi(\alpha\hat{h}_{t-1} + (1-\alpha)\tilde{h}_{t-2} + y_t), &(6)
\end{aligned}$$

with the starting condition in each sequence, $\hat{h}_{t-p+1}=\phi y_{t-p+1}$. Without loss of generality, consider $p=2$ lags in the model so that $\hat{h}_{t-1}=\phi y_{t-1}$. Then

$$\hat{h}_t = \phi(\alpha\phi y_{t-1} + (1-\alpha)\tilde{h}_{t-2} + y_t) \tag{7}$$

and the model can be written in the simpler form

$$\hat{y}_{t+1} = \phi_1 y_t + \phi_2 y_{t-1} + \phi(1-\alpha)\tilde{h}_{t-2}, \tag{8}$$

with auto-regressive weights $\phi_1 := \phi$ and $\phi_2 := \alpha\phi^2$. We now see that there is a third term on the RHS of Eq. 8 which vanishes when $\alpha=1$ but provides infinite memory to the model since $\tilde{h}_{t-2}$ depends on $y_1$, the first observation in the whole time series, not just the first observation in the sequence. To see this, we unroll the recursion relation in the exponential smoother:

$$\tilde{h}_{t+1} = \alpha\sum_{s=0}^{t-1}(1-\alpha)^s\hat{h}_{t-s} + (1-\alpha)^t y_1, \tag{9}$$

where we used the property that $\tilde{h}_1=y_1$. It is often convenient to characterize exponential smoothing by the half-life5. To gain further insight on the memory of the network, Dixon (2021) studies the partial auto-correlations of the process $\hat{y}_{t+m}+u_t$ to characterize the memory and derive various properties and constraints needed for network stability and sequence length selection.
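As a quick numerical sanity check of the unrolled form in Eq. 9, the recursion and its closed form can be compared directly; the values of α, t and the hidden-state sequence below are illustrative only.

```python
import numpy as np

alpha, t = 0.4, 6
rng = np.random.default_rng(2)
h_hat = rng.standard_normal(t + 1)    # h_hat[k] stands for h_hat_k, k = 1, ..., t (index 0 unused)
y1 = rng.standard_normal()            # the first observation y_1, with h_tilde_1 = y_1

# Recursive exponential smoothing: h_tilde_{k+1} = alpha*h_hat_k + (1-alpha)*h_tilde_k
h_tilde = y1
for k in range(1, t + 1):
    h_tilde = alpha * h_hat[k] + (1 - alpha) * h_tilde

# Closed form of Eq. 9
closed = alpha * sum((1 - alpha) ** s * h_hat[t - s] for s in range(t)) + (1 - alpha) ** t * y1

assert np.isclose(h_tilde, closed)
```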

4. Multivariate Dynamic α-RNNs

We now return to the more general multivariate setting as in Section 2. The extension of RNNs to dynamical time series models, suitable for non-stationary time series data, relies on dynamic exponential smoothing. This is a time-dependent, convex combination of the smoothed output, $\tilde{h}_t$, and the hidden state $\hat{h}_t$:

$$\tilde{h}_{t+1} = \alpha_t\circ\hat{h}_t + (1-\alpha_t)\circ\tilde{h}_t, \tag{10}$$

where $\circ$ denotes the Hadamard product between vectors and where $\alpha_t\in[0,1]^H$ denotes the dynamic smoothing factor, which can equivalently be written in the one-step-ahead forecast form

$$\tilde{h}_{t+1} = \tilde{h}_t + \alpha_t\circ(\hat{h}_t - \tilde{h}_t). \tag{11}$$

Hence the smoothing can be viewed as a dynamic form of latent forecast error correction. When $(\alpha_t)_i=0$, the $i$th component of the latent forecast error is ignored and the smoothing merely repeats the $i$th component of the current hidden state $(\tilde{h}_t)_i$, which enforces the removal of the $i$th component from the memory. When $(\alpha_t)_i=1$, the latent forecast error overwrites the current $i$th component of the hidden state $(\tilde{h}_t)_i$. The smoothing can also be viewed as a weighted sum of the lagged observations, with lower or equal weights, $\alpha_{t-s}\circ\prod_{r=1}^{s}(1-\alpha_{t-r+1})$ at the $s\geq 1$ lagged hidden state, $\hat{h}_{t-s}$:

$$\tilde{h}_{t+1} = \alpha_t\circ\hat{h}_t + \sum_{s=1}^{t-1}\alpha_{t-s}\circ\prod_{r=1}^{s}(1-\alpha_{t-r+1})\circ\hat{h}_{t-s} + g(\alpha),$$

where $g(\alpha):=\prod_{r=0}^{t-1}(1-\alpha_{t-r})\circ\tilde{h}_1$. Note that for any $(\alpha_{t-r+1})_i=1$, the $i$th component of the smoothed hidden state $(\tilde{h}_{t+1})_i$ will have no dependency on the lagged $i$th components of the hidden states $\{(\hat{h}_{t-s})_i\}_{s\geq r}$. The model simply forgets the $i$th component of the hidden states at or beyond the $r$th lag.

4.1. Neural Network Exponential Smoothing

While the class of $\alpha_t$-RNN models under consideration is free to define how α is updated (including changing the frequency of the update) based on the hidden state and input, a convenient choice is to use a recurrent layer. Remaining in the more general setup with a hidden state vector $\hat{h}_t\in\mathbb{R}^H$, let us model the smoothing parameter $\hat{\alpha}_t\in[0,1]^H$ to give a filtered time series

$$\tilde{h}_t = \hat{\alpha}_t\circ\hat{h}_t + (1-\hat{\alpha}_t)\circ\tilde{h}_{t-1}. \tag{12}$$

This smoothing is a vectorized form of the above classical setting, only here we note that when $(\hat{\alpha}_t)_i=1$, the $i$th component of the hidden variable is unmodified and the past filtered hidden variable is forgotten. On the other hand, when $(\hat{\alpha}_t)_i=0$, the $i$th component of the hidden variable is obsolete, instead setting the current filtered hidden variable to its past value. The smoothing in Eq. 12 can be viewed then as updating long-term memory, maintaining a smoothed hidden state variable as the memory through a convex combination of the current hidden variable and the previous smoothed hidden variable.

The hidden variable is given by the semi-affine transformation:

$$\hat{h}_t = \sigma(U_h\tilde{h}_{t-1} + W_h x_t + b_h), \tag{13}$$

which in turn depends on the previous smoothed hidden variable. Substituting Eq. 13 into Eq. 12 gives a function of $\tilde{h}_{t-1}$ and $x_t$:

$$\begin{aligned}
\tilde{h}_t &= g(\tilde{h}_{t-1}, x_t; \alpha) &(14)\\
&:= \hat{\alpha}_t\circ\sigma(U_h\tilde{h}_{t-1} + W_h x_t + b_h) + (1-\hat{\alpha}_t)\circ\tilde{h}_{t-1}. &(15)
\end{aligned}$$

We see that when $(\hat{\alpha}_t)_i=0$, the $i$th component of the smoothed hidden variable $(\tilde{h}_t)_i$ is not updated by the input $x_t$. Conversely, when $(\hat{\alpha}_t)_i=1$, we observe that the $i$th hidden variable locally behaves like a non-linear autoregressive series. Thus the smoothing parameter can be viewed as the sensitivity of the smoothed hidden state to the input $x_t$.

The challenge becomes how to determine dynamically how much error correction is needed. As in GRUs and LSTMs, we can address this problem by learning $\hat{\alpha}=F_{(W_\alpha,U_\alpha,b_\alpha)}(\underline{x}_t)$ from the input variables with the recurrent layer parameterized by weights and biases $(W_\alpha, U_\alpha, b_\alpha)$. The one-step ahead forecast of the smoothed hidden state, $\tilde{h}_t$, is the filtered output of another plain RNN with weights and biases $(W_h, U_h, b_h)$.
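The sketch below shows one way such a cell could be written as a custom Keras recurrent cell, with a sigmoid recurrent layer producing $\hat{\alpha}_t$ and the smoothing of Eqs. 12, 13 applied to the hidden state. It is an illustrative reading of the equations only: the class name, the initializers and the choice to emit the smoothed state as the layer output are our assumptions, not the authors' AlphaRNN/AlphatRNN implementation (see the Data Availability Statement for their code).

```python
import tensorflow as tf

class AlphatRNNCellSketch(tf.keras.layers.Layer):
    """Illustrative alpha_t-RNN cell: Eqs. 12-13 plus a recurrent smoothing gate."""

    def __init__(self, units, **kwargs):
        super().__init__(**kwargs)
        self.units = units
        self.state_size = units            # state carried across steps: h_tilde

    def build(self, input_shape):
        d = int(input_shape[-1])
        # Hidden-state parameters (W_h, U_h, b_h)
        self.W_h = self.add_weight(shape=(d, self.units), initializer="glorot_uniform", name="W_h")
        self.U_h = self.add_weight(shape=(self.units, self.units), initializer="orthogonal", name="U_h")
        self.b_h = self.add_weight(shape=(self.units,), initializer="zeros", name="b_h")
        # Smoothing-gate parameters (W_alpha, U_alpha, b_alpha)
        self.W_a = self.add_weight(shape=(d, self.units), initializer="glorot_uniform", name="W_a")
        self.U_a = self.add_weight(shape=(self.units, self.units), initializer="orthogonal", name="U_a")
        self.b_a = self.add_weight(shape=(self.units,), initializer="zeros", name="b_a")

    def call(self, x_t, states):
        h_tilde_prev = states[0]
        # Dynamic smoothing factor alpha_hat_t in [0,1]^H from a sigmoid recurrent layer
        alpha_t = tf.sigmoid(tf.matmul(x_t, self.W_a) + tf.matmul(h_tilde_prev, self.U_a) + self.b_a)
        # Hidden state update (Eq. 13)
        h_hat = tf.tanh(tf.matmul(x_t, self.W_h) + tf.matmul(h_tilde_prev, self.U_h) + self.b_h)
        # Exponential smoothing (Eq. 12)
        h_tilde = alpha_t * h_hat + (1.0 - alpha_t) * h_tilde_prev
        return h_tilde, [h_tilde]

# Wrap the cell in a Keras RNN layer and add a single linear output unit
model = tf.keras.Sequential([
    tf.keras.layers.RNN(AlphatRNNCellSketch(10), input_shape=(None, 1)),
    tf.keras.layers.Dense(1),
])
```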

5. Results

This section describes numerical experiments using financial time series data to evaluate the various RNN models. All models are implemented in v1.15.0 of TensorFlow (Abadi et al., 2016). Time series cross-validation is performed using separate training, validation and test sets. To preserve the time structure of the data and avoid look-ahead bias, each set represents a contiguous sampling period, with the test set containing the most recent observations. To prepare the training, validation and testing sets for m-step ahead prediction, we set the target variables (responses) to the $t+m$ observation, $y_{t+m}$, and use the lags from $t-p+1,\dots,t$ for each input sequence. This is repeated by incrementing $t$ until the end of each set. In our experiments, each element in the input sequence is either a scalar or a vector and the target variables are scalar.
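A minimal sketch of this windowing for a univariate series is shown below; the function name and array shapes are ours, not taken from the authors' code.

```python
import numpy as np

def make_sequences(series, p, m):
    """Build input sequences of the p lags x_{t-p+1}, ..., x_t and the
    m-step ahead scalar target y_{t+m}, incrementing t one step at a time.

    series: (N,) or (N, d) array. Returns X of shape (num_samples, p, d)
    and y of shape (num_samples,)."""
    x = np.asarray(series, dtype=float)
    if x.ndim == 1:
        x = x[:, None]
    X, y = [], []
    for t in range(p - 1, len(x) - m):
        X.append(x[t - p + 1: t + 1])     # the p lags ending at time t
        y.append(x[t + m, 0])             # the t+m observation as the response
    return np.array(X), np.array(y)

# Example: p = 4 lags and four-step ahead targets, as in the Bitcoin experiment
X_train, y_train = make_sequences(np.arange(100, dtype=float), p=4, m=4)
```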

We use the SimpleRNN Keras method with the default settings to implement a fully connected RNN. Tanh activation functions are used for the hidden layer, with the number of units found by time series cross-validation with five folds to be $H\in\{5,10,20\}$ and L1 regularization $\lambda_1\in\{0,10^{-3},10^{-2}\}$. The Glorot and Bengio uniform method (Glorot and Bengio, 2010) is used to initialize the non-recurrent weight matrices and an orthogonal method is used to initialize the recurrence weights as a random orthogonal matrix. Keras's GRU method is implemented using the variant based on version 1406.1078v3, which applies the reset gate to the hidden state before matrix multiplication. See Appendix 1.1 for a definition of the reset gate. Similarly, the LSTM method in Keras is used. Tanh activation functions are used for the recurrence layer and sigmoid activation functions are used for all other gates. The AlphaRNN and AlphatRNN classes are implemented by the authors for use in Keras. Statefulness is always disabled.
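As an illustration, the sketch below builds the plain RNN benchmark with one point from the stated cross-validation grids ($H=10$, $\lambda_1=10^{-3}$); it is a hedged reconstruction of the described configuration rather than the authors' exact script.

```python
import tensorflow as tf

def build_simple_rnn(p, d, H=10, l1=1e-3):
    """One illustrative configuration of the plain-RNN benchmark: tanh recurrence,
    Glorot-uniform input weights, orthogonal recurrent weights, L1 regularization,
    statefulness disabled, and a single unactivated output unit."""
    return tf.keras.Sequential([
        tf.keras.layers.SimpleRNN(
            H,
            activation="tanh",
            kernel_initializer="glorot_uniform",
            recurrent_initializer="orthogonal",
            kernel_regularizer=tf.keras.regularizers.l1(l1),
            stateful=False,
            input_shape=(p, d),
        ),
        tf.keras.layers.Dense(1),
    ])

# The GRU and LSTM benchmarks swap in tf.keras.layers.GRU / tf.keras.layers.LSTM
# (tanh recurrence, sigmoid gates, as described above) with the same output layer.
```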

Each architecture is trained for up to 2,000 epochs with the Adam optimization algorithm with default parameter values and using a mini-batch size of 1,000 drawn from the training set. Early stopping is implemented using a Keras callback with a patience of 50 to 100 and a minimum loss delta between $10^{-8}$ and $10^{-6}$. So, for example, if the patience is set to 50 and the minimum loss delta is $10^{-8}$, then fifty consecutive loss evaluations on mini-batch updates must each lie within $10^{-8}$ of each other before the training terminates. In practice, the actual number of epochs required varies between trainings due to the randomization of the weights and biases, and across different architectures, and is typically between 200 and 1,500. The 2,000 epoch limit is chosen as it provides an upper limit which is rarely encountered. No random permutations are used in the mini-batch sampling in order to preserve the ordering of the time series data. To evaluate the forecasting accuracy, we set the forecast horizon to up to ten steps ahead instead of the usual one-step ahead forecasts often presented in the machine learning literature; longer forecasting horizons are often more relevant due to operational constraints in industry applications and are more challenging when the data is non-stationary, since the fixed partial auto-correlation of the process $\hat{y}_{t+m}+u_t$ will not adequately capture the observed changing partial auto-correlation structure of the data. In the experiments below, we use $m=4$ and $m=10$ steps ahead. The reason we use less than $m=10$ in the first experiment is because we find that there is little memory in the data beyond four lags and hence it is of little value to predict beyond four time steps.
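The training loop described here might be wired up along the following lines, reusing `model` and the windowed arrays from the earlier sketches; the patience and minimum-delta values are one point from the stated ranges, and the Keras `EarlyStopping` callback below monitors the per-epoch validation loss, which is a simplification of the per-mini-batch criterion described above.

```python
import tensorflow as tf

# `model`, `X_train`, `y_train` as in the earlier sketches; `X_val`, `y_val` are a
# contiguous validation split built the same way as the training set.
model.compile(optimizer="adam", loss="mse")   # Adam with default parameter values

early_stop = tf.keras.callbacks.EarlyStopping(
    monitor="val_loss", patience=50, min_delta=1e-8)

history = model.fit(
    X_train, y_train,
    validation_data=(X_val, y_val),
    epochs=2000,
    batch_size=1000,
    shuffle=False,            # preserve the ordering of the time series data
    callbacks=[early_stop],
    verbose=0,
)
```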

5.1. Bitcoin Forecasting

One minute snapshots of USD denominated Bitcoin mid-prices are captured from Coinbase over the period from January 1 to November 10, 2018. We demonstrate how the different networks forecast Bitcoin prices using lagged observations of prices. The predictor in the training and the test set is normalized using the moments of the training data only, so as to avoid look-ahead bias or introducing a bias into the test data. We accept the null hypothesis of the augmented Dickey-Fuller test as we cannot reject it at even the 90% confidence level. The data is therefore non-stationary (it contains at least one unit root). The largest test statistic is −2.094 and the p-value is 0.237 (the critical values are 1%: −3.431, 5%: −2.862, and 10%: −2.567). While the partial autocovariance structure is expected to be time dependent, we observe a short memory of only four lags by estimating the PACF over the entire history (see Figure 2).
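The diagnostics used here can be reproduced with statsmodels along the following lines; the `prices` array below is a random-walk placeholder standing in for the Bitcoin mid-price series, so the printed test values are illustrative only.

```python
import numpy as np
from statsmodels.tsa.stattools import adfuller, pacf

# Placeholder series standing in for the 1-min Bitcoin mid-prices (not supplied here)
prices = np.cumsum(np.random.default_rng(3).standard_normal(5000))

# Augmented Dickey-Fuller test: failing to reject the null indicates a unit root
stat, pvalue, usedlag, nobs, crit, icbest = adfuller(prices)
print(f"ADF statistic: {stat:.3f}, p-value: {pvalue:.3f}, critical values: {crit}")

# Partial autocorrelations; the largest significant lag guides the sequence length p
partial_acf = pacf(prices, nlags=30)
```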

FIGURE 2. The partial autocorrelogram (PACF) for 1 min snapshots of Bitcoin mid-prices (USD) over the period January 1, 2018 to November 10, 2018.

We choose a sequence length of $p=4$ based on the PACF and perform a four-step ahead forecast. We comment in passing that there is little, if any, merit in forecasting beyond this time horizon given the largest significant lag indicated by the PACF. Figure 3 compares the performance of the various forecasting networks and shows that stationary models such as the plain RNN and the α-RNN are least able to capture the price dynamics; this is expected because the partial autocorrelation is non-stationary.

FIGURE 3. The four-step ahead forecasts of the minute snapshot Bitcoin prices (USD) with MSEs shown in parentheses. (top) The forecasts for each architecture and the observed out-of-sample time series. (bottom) The errors for each architecture over the same test period. Note that the prices have been standardized.

Viewing the results of time series cross-validation, using the first 30,000 observations, in Table 1, we observe minor differences in the out-of-sample performance of the LSTM and GRU vs. the αt-RNN, suggesting that the reset gate and extra cellular memory in the LSTM provide negligible benefit for this dataset. In this case, we observe very marginal additional benefit in the LSTM, yet the complexity of the latter is approximately 50x that of the αt-RNN. Furthermore, we observe evidence of strong over-fitting in the GRU and LSTM vs. the αt-RNN. The ratios of training to test errors are respectively 0.596 and 0.603 vs. 0.783. The ratios of training to validation errors are 0.863 and 0.862 vs. 0.898.

TABLE 1. The four-step ahead Bitcoin forecasts are compared for various architectures using time series cross-validation. The half-life of the α-RNN is found to be 1.077 min ($\hat{\alpha}=0.4744$).

5.2. High Frequency Trading Data

Our dataset consists of N=1,033,468 observations of tick-by-tick Volume Weighted Average Prices (VWAPs) of CME listed ESU6 level II data over the month of August 2016 (Dixon, 2018; Dixon et al., 2019).

We reject the null hypothesis of the augmented Dickey-Fuller test at the 99% confidence level in favor of the alternative hypothesis that the data is stationary (contains no unit roots; see, for example, Tsay (2010) for a definition of unit roots and details of the Dickey-Fuller test). The test statistic is −5.243 and the p-value is $7.16\times10^{-6}$ (the critical values are 1%: −3.431, 5%: −2.862, and 10%: −2.567).

The PACF in Figure 4 is observed to exhibit a cut-off at approximately 23 lags. We therefore choose a sequence length of p=23 and perform a ten-step ahead forecast. Note that the time-stamps of the tick data are not uniform and hence a step refers to a tick.

FIGURE 4. The PACF of the tick-by-tick VWAP of ESU6 over the month of August 2016.

Figure 5 compares the performance of the various networks and shows that the plain RNN performs poorly, whereas the αt-RNN better captures the VWAP dynamics. From Table 2, we further observe relatively minor differences in the performance of the GRU vs. the αt-RNN, again suggesting that the reset gate and extra cellular memory in the LSTM provide no benefit. In this case, we find that the GRU has 10x the number of parameters of the αt-RNN with very marginal benefit. Furthermore, we observe evidence of strong over-fitting in the GRU and LSTM vs. the αt-RNN, although overall we observe stronger over-fitting on this dataset than on the Bitcoin dataset. The ratios of training to test errors are respectively 0.159 and 0.187 vs. 0.278. The ratios of training to validation errors are 0.240 and 0.226 vs. 0.368.

FIGURE 5. The ten-step ahead forecasts of VWAPs are compared for various architectures using the tick-by-tick dataset. (top) The forecasts for each architecture and the observed out-of-sample time series. (bottom) The errors for each architecture over the same test period.

TABLE 2. The ten-step ahead forecasting models for VWAPs are compared for various architectures using time series cross-validation. The half-life of the α-RNN is found to be 2.398 periods ($\hat{\alpha}=0.251$).

6. Conclusion

Financial time series modeling has entered an era of unprecedented growth in the size and complexity of data which requires new modeling methodologies. This paper demonstrates a general class of exponentially smoothed recurrent neural networks (RNNs) which are well suited to modeling non-stationary dynamical systems arising in industrial applications such as algorithmic and high frequency trading. Application of exponentially smoothed RNNs to minute level Bitcoin prices and CME futures tick data demonstrates the efficacy of exponential smoothing for multi-step time series forecasting. These examples show that exponentially smoothed RNNs are well suited to forecasting, requiring fewer layers and fewer parameters than more complex architectures such as GRUs and LSTMs, yet retaining the most important aspects needed for forecasting non-stationary series. These methods scale to large numbers of covariates and complex data. The experimental design and architectural parameters, such as the predictive horizon and model parameters, can be determined by simple statistical tests and diagnostics, without the need for extensive hyper-parameter optimization. Moreover, unlike traditional time series methods such as ARIMA models, these methods are shown to be unconditionally stable without the need to pre-process the data.

Data Availability Statement

The datasets and Python codes for this study can be found at https://github.com/mfrdixon/alpha-RNN.

Author Contributions

MD contributed the methodology and results, and JL contributed to the results section.

Funding

The authors declare that this study received funding from Intel Corporation. The funder was not involved in the study design, collection, analysis, interpretation of data, the writing of this article or the decision to submit it for publication.

Conflict of Interest

The authors declare that the research was conducted in the absence of any commercial or financial relationships that could be construed as a potential conflict of interest.

Appendix

1. GRUs and LSTMs

1.1. GRUs

A GRU is given by:

$$\begin{aligned}
\text{smoothing:}\quad & \tilde{h}_t = \hat{\alpha}_t\circ\hat{h}_t + (1-\hat{\alpha}_t)\circ\tilde{h}_{t-1}\\
\text{smoother update:}\quad & \hat{\alpha}_t = \sigma^{(1)}(U_\alpha\tilde{h}_{t-1} + W_\alpha x_t + b_\alpha)\\
\text{hidden state update:}\quad & \hat{h}_t = \sigma(U_h(\hat{r}_t\circ\tilde{h}_{t-1}) + W_h x_t + b_h)\\
\text{reset update:}\quad & \hat{r}_t = \sigma^{(1)}(U_r\tilde{h}_{t-1} + W_r x_t + b_r).
\end{aligned}$$

When viewed as an extension of our $\alpha_t$-RNN model, we see that it has an additional reset, or switch, $\hat{r}_t$, which forgets the dependence of $\hat{h}_t$ on the smoothed hidden state. Effectively, it turns the update for $\hat{h}_t$ from a plain RNN into an FFN and entirely neglects the recurrence. The recurrence in the update of $\hat{h}_t$ is thus dynamic. It may appear that the combination of a reset and adaptive smoothing is redundant. But remember that $\hat{\alpha}_t$ affects the level of error correction in the update of the smoothed hidden state, $\tilde{h}_t$, whereas $\hat{r}_t$ adjusts the level of recurrence in the unsmoothed hidden state $\hat{h}_t$. Put differently, $\hat{\alpha}_t$ by itself cannot disable the memory in the smoothed hidden state (internal memory), whereas $\hat{r}_t$ in combination with $\hat{\alpha}_t$ can. More precisely, when $\hat{\alpha}_t=1$ and $\hat{r}_t=0$, $\tilde{h}_t=\hat{h}_t=\sigma(W_h x_t + b_h)$, which is reset to the latest input, $x_t$, and the GRU is just an FFN. Also, when $\hat{\alpha}_t=1$ and $\hat{r}_t>0$, a GRU acts like a plain RNN. Thus a GRU can be seen as a more general architecture which is capable of being an FFN or a plain RNN under certain parameter values.

These additional layers (or cells) enable a GRU to learn extremely complex long-term temporal dynamics that a vanilla RNN is not capable of. Lastly, we comment in passing that in the GRU, as in an RNN, there is a final feedforward layer to transform the (smoothed) hidden state to a response:

$$\hat{y}_t = W_y\tilde{h}_t + b_y. \tag{A1}$$
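To make the comparison with the $\alpha_t$-RNN update concrete, the following NumPy sketch performs a single GRU step exactly as written above, with the reset gate applied to the smoothed hidden state before the recurrent matrix multiplication; the function name and parameter packing are ours.

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def gru_step(x_t, h_tilde_prev, params):
    """One GRU update in the notation above: smoothing gate alpha_hat, reset gate
    r_hat, hidden-state proposal h_hat, and the smoothed state h_tilde."""
    W_a, U_a, b_a, W_r, U_r, b_r, W_h, U_h, b_h = params
    alpha_t = sigmoid(U_a @ h_tilde_prev + W_a @ x_t + b_a)          # smoother update
    r_t = sigmoid(U_r @ h_tilde_prev + W_r @ x_t + b_r)              # reset update
    h_hat = np.tanh(U_h @ (r_t * h_tilde_prev) + W_h @ x_t + b_h)    # hidden state update
    return alpha_t * h_hat + (1.0 - alpha_t) * h_tilde_prev          # smoothing
```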

1.2. LSTMs

LSTMs are similar to GRUs but have a separate (cell) memory, $c_t$, in addition to a hidden state $h_t$. LSTMs also do not require that the memory updates are a convex combination. Hence they are more general than exponential smoothing. The mathematical description of LSTMs is rarely given in an intuitive form, but the model can be found in, for example, Hochreiter and Schmidhuber (1997).

The cell memory is updated by the following expression involving a forget gate, $\hat{\alpha}_t$, an input gate $\hat{z}_t$ and a cell gate $\hat{c}_t$:

$$c_t = \hat{\alpha}_t\circ c_{t-1} + \hat{z}_t\circ\hat{c}_t. \tag{A2}$$

In the terminology of LSTMs, the triple $(\hat{\alpha}_t,\hat{r}_t,\hat{z}_t)$ are respectively referred to as the forget gate, output gate, and input gate. Our change of terminology is deliberate and designed to provide more intuition and continuity with RNNs and the statistics literature. We note that in the special case when $\hat{z}_t=1-\hat{\alpha}_t$ we obtain a similar exponential smoothing expression to that used in our $\alpha_t$-RNN. Beyond that, the role of the input gate appears superfluous and difficult to reason with using time series analysis.

When the forget gate $\hat{\alpha}_t=0$, the cell memory depends solely on the cell memory gate update $\hat{c}_t$. Through the term $\hat{\alpha}_t\circ c_{t-1}$, the cell memory has long-term memory which is only forgotten beyond lag $s$ if $\hat{\alpha}_{t-s}=0$. Thus the cell memory has an adaptive autoregressive structure.

The extra “memory”, treated as a hidden state and separate from the cell memory, is nothing more than a Hadamard product:

$$h_t = \hat{r}_t\circ\tanh(c_t), \tag{A3}$$

which is reset if $\hat{r}_t=0$. If $\hat{r}_t=1$, then the cell memory directly determines the hidden state.

Thus the reset gate can entirely override the effect of the cell memory's autoregressive structure, without erasing it. In contrast, the $\alpha_t$-RNN and the GRU have one memory, which serves as the hidden state, and it is directly affected by the reset gate.

The reset, forget, input and cell memory gates are updated by plain RNNs all depending on the hidden state ht.

$$\begin{aligned}
\text{Reset gate:}\quad & \hat{r}_t = \sigma(U_r h_{t-1} + W_r x_t + b_r)\\
\text{Forget gate:}\quad & \hat{\alpha}_t = \sigma(U_\alpha h_{t-1} + W_\alpha x_t + b_\alpha)\\
\text{Input gate:}\quad & \hat{z}_t = \sigma(U_z h_{t-1} + W_z x_t + b_z)\\
\text{Cell memory gate:}\quad & \hat{c}_t = \tanh(U_c h_{t-1} + W_c x_t + b_c).
\end{aligned}$$

The LSTM separates out the long memory, stored in the cellular memory, but uses a copy of it, which may additionally be reset. Strictly speaking, the cellular memory has long-short autoregressive memory structure, so it would be misleading in the context of time series analysis to strictly discern the two memories as long and short (as the nomenclature suggests). The latter can be thought of as a truncated version of the former.

Footnotes

1Sample complexity bounds for RNNs have recently been derived by Akpinar et al. (2019). Theorem 3.1 shows that for $a$ recurrent units, inputs of length at most $b$, and a single real-valued output unit, the network requires only $O(a^4 b/\epsilon^2)$ samples in order to attain a population prediction error of $\epsilon$. Thus the more recurrent units required, the larger the amount of training data needed.

2By contrast, plain RNNs model stationary time series, and GRUs/LSTMs model non-stationary, but no hybrid exists which provides the modeler with the control to deploy either.

3The sequence of features is from the same time series as the predictor hence n=d=1.

4Long memory refers to autoregressive memory beyond the sequence length. This is also sometimes referred to as “stateful”. For avoidance of doubt, we are not suggesting that the α-RNN has an additional cellular memory, as in LSTMs.

5The half-life is the number of lags needed for the coefficient $(1-\alpha)^s$ to equal a half, which is $s=-1/\log_2(1-\alpha)$.
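For example, substituting the fitted Bitcoin smoothing parameter reported in Table 1, $\hat{\alpha}=0.4744$, gives

$$s = -\frac{1}{\log_2(1-\hat{\alpha})} = -\frac{1}{\log_2(0.5256)} \approx 1.077,$$

consistent with the half-life of 1.077 min reported there; similarly, $\hat{\alpha}=0.251$ from Table 2 gives $s\approx 2.40$ periods.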

References

Abadi, M., Barham, P., Chen, J., Chen, Z., Davis, A., Dean, J., et al. (2016). “TensorFlow: a system for large-scale machine learning,” in Proceedings of the 12th USENIX conference on operating systems design and implementation, Savannah, GA, November 2–4, 2016 (Berkeley, CA: OSDI’16) 265–283.

Akpinar, N. J., Kratzwald, B., and Feuerriegel, S. (2019). Sample complexity bounds for recurrent neural networks with application to combinatorial graph problems. arXiv [Preprint]. Available at: https://arxiv.org/abs/1901.10289.

Bao, W., Yue, J., and Rao, Y. (2017). A deep learning framework for financial time series using stacked autoencoders and long-short term memory. PloS One 12, e0180944–e0180924. doi:10.1371/journal.pone.0180944

Bayer, J. (2015). Learning sequence representations. MS dissertation. Munich, Germany: Technische Universität München.

Borovkova, S., and Tsiamas, I. (2019). An ensemble of LSTM neural networks for high‐frequency stock market classification. J. Forecast. 38, 600–619. doi:10.1002/for.2585

Borovykh, A., Bohte, S., and Oosterlee, C. W. (2017). Conditional time series forecasting with convolutional neural networks. arXiv [Preprint]. Available at: https://arxiv.org/abs/1703.04691.

Box, G., and Jenkins, G. M. (1976). Time series analysis: forecasting and control. Hoboken, NJ: Holden Day, 575

Chen, S., and Ge, L. (2019). Exploring the attention mechanism in LSTM-based Hong Kong stock price movement prediction. Quant. Finance 19, 1507–1515. doi:10.1080/14697688.2019.1622287

Chung, J., Gülçehre, Ç., Cho, K., and Bengio, Y. (2014). Empirical evaluation of gated recurrent neural networks on sequence modeling. arXiv [Preprint]. Available at: https://arxiv.org/abs/1412.3555.

Dixon, M. (2018). Sequence classification of the limit order book using recurrent neural networks. J. Comput. Sci. 24, 277. doi:10.1016/j.jocs.2017.08.018

Dixon, M. F., Polson, N. G., and Sokolov, V. O. (2019). Deep learning for spatio‐temporal modeling: dynamic traffic flows and high frequency trading. Appl. Stoch. Model. Bus Ind 35, 788–807. doi:10.1002/asmb.2399

Dixon, M. (2021). Industrial Forecasting with Exponentially Smoothed Recurrent Neural Networks, forthcoming in Technometrics.

Dixon, M., and London, J. (2021b). Alpha-RNN source code and data repository. Available at: https://github.com/mfrdixon/alpha-RNN.

Glorot, X., and Bengio, Y. (2010). “Understanding the difficulty of training deep feedforward neural networks,” in Proceedings of the international conference on artificial intelligence and statistics (AISTATS’10), Sardinia, Italy, Society for Artificial Intelligence and Statistics, 249–256

Graves, A. (2013). Generating sequences with recurrent neural networks. arXiv [Preprint]. Available at: https://arxiv.org/abs/1308.0850.

Hamilton, J. (1994). Time series analysis. Princeton, NJ: Princeton University Press, 592

Hochreiter, S., and Schmidhuber, J. (1997). Long short-term memory. Neural. Comput. 9, 1735–1780. doi:10.1162/neco.1997.9.8.1735

Kirchgässner, G., and Wolters, J. (2007). Introduction to modern time series analysis. Berlin, Heidelberg: Springer-Verlag, 277

Li, D., and Zhu, K. (2020). Inference for asymmetric exponentially weighted moving average models. J. Time Ser. Anal. 41, 154–162. doi:10.1111/jtsa.12464

Mäkinen, Y., Kanniainen, J., Gabbouj, M., and Iosifidis, A. (2019). Forecasting jump arrivals in stock prices: new attention-based network architecture using limit order book data. Quant. Finance 19, 2033–2050. doi:10.1080/14697688.2019.1634277

Pascanu, R., Mikolov, T., and Bengio, Y. (2012). “On the difficulty of training recurrent neural networks,” in ICML’13: proceedings of the 30th international conference on machine learning, 1310–1318. Available at: https://dl.acm.org/doi/10.5555/3042817.3043083.

Sirignano, J., and Cont, R. (2019). Universal features of price formation in financial markets: perspectives from deep learning. Quant. Finance 19, 1449–1459. doi:10.1080/14697688.2019.1622295

Tsay, R. S. (2010). Analysis of financial time series. 3rd Edn. Hoboken, NJ: Wiley

Keywords: recurrent neural networks, exponential smoothing, bitcoin, time series modeling, high frequency trading

Citation: Dixon M and London J (2021) Financial Forecasting With α-RNNs: A Time Series Modeling Approach. Front. Appl. Math. Stat. 6:551138. doi: 10.3389/fams.2020.551138

Received: 12 April 2020; Accepted: 13 October 2020;
Published: 11 February 2021.

Edited by:

Glenn Fung, Independent Researcher, Madison, United States

Reviewed by:

Alex Jung, Aalto University, Finland
Abhishake Rastogi, University of Potsdam, Germany

Copyright © 2021 Dixon and London. This is an open-access article distributed under the terms of the Creative Commons Attribution License (CC BY). The use, distribution or reproduction in other forums is permitted, provided the original author(s) and the copyright owner(s) are credited and that the original publication in this journal is cited, in accordance with accepted academic practice. No use, distribution or reproduction is permitted which does not comply with these terms.

*Correspondence: Matthew Dixon, matthew.dixon@iit.edu
