2.1. Assumed Data Structure
Our analysis will be based on daily financial returns

rt,   t = 1, 2, …, T,

and a corresponding sequence of daily realised measures

RMt,   t = 1, 2, …, T.
Realised measures are theoretically sound, nonparametric estimators, computed from high-frequency data, of the variation of the price path of an asset during the hours at which the asset trades frequently on an exchange. Realised measures ignore the variation of prices overnight, and sometimes also the variation in the first few minutes of the trading day, when recorded prices may contain large errors. The background to realised measures can be found in the survey articles by Andersen et al. (2009) and Barndorff-Nielsen and Shephard (2007).
The simplest realised measure is realised variance, computed from the log-price process X as

RVt = Σj (Xtj,t − Xtj−1,t)²,
where tj,t are the normalised times of trades or quotes (or a subset of them) on the tth day. The theoretical justification of this measure is that if prices are observed without noise then, as minj |tj,t − tj−1,t| → 0, it consistently estimates the quadratic variation of the price process on the tth day. It was formalised econometrically by Andersen et al. (2001a) and Barndorff-Nielsen and Shephard (2002). In practice, market microstructure noise plays an important part and the above authors use 1- to 5-minute return data or a subset of trades or quotes (e.g. every 15th trade) to mitigate the effect of the noise. Hansen and Lunde (2006) systematically study the impact of noise on realised variance. If a subset of the data is used with the realised variance, then it is possible to average across many such estimators, each using a different subset. This is called subsampling. When we report RV estimators we always subsample them to the maximum degree possible from the data, as this averaging is always theoretically beneficial, especially in the presence of modest amounts of noise.
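As a concrete illustration, realised variance and its subsampled version can be sketched as below. This is a minimal sketch on simulated noise-free prices; the function names, grid sizes and parameter values are our own illustrative choices, not part of the original analysis, and edge effects at the ends of each sparse grid are ignored.

```python
import numpy as np

def realised_variance(log_prices):
    """Realised variance: the sum of squared high-frequency returns."""
    returns = np.diff(log_prices)
    return float(np.sum(returns ** 2))

def subsampled_rv(log_prices, step):
    """Average RV over the `step` shifted sparse sampling grids.

    `log_prices` is a 1-D array of intraday log prices on a fine grid and
    `step` is the sparse sampling interval in fine-grid units (e.g.
    step=300 gives 5-minute RV from 1-second prices).
    """
    rvs = [realised_variance(log_prices[offset::step])
           for offset in range(step)]
    return float(np.mean(rvs))

# Toy example: one day of noise-free prices with constant spot volatility,
# so the daily quadratic variation is sigma^2 (values illustrative only).
rng = np.random.default_rng(0)
n, sigma = 23400, 0.01                 # ~1-second grid, 1% daily volatility
x = np.cumsum(rng.normal(0.0, sigma / np.sqrt(n), n))
print(realised_variance(x))            # close to sigma^2 = 1e-4
print(subsampled_rv(x, step=300))      # subsampled 5-minute RV, similar level
```

Without noise the dense and sparse estimators agree in expectation; it is under microstructure noise that the sparse, subsampled version becomes attractive.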
Three classes of estimators which are somewhat robust to noise have been suggested in the literature: pre-averaging (Jacod et al., 2009), multiscale (Zhang, 2006; Zhang et al., 2005) and realised kernel (Barndorff-Nielsen et al., 2008).2 Here we focus on the realised kernel in the case where we use a Parzen weight function. It has the familiar form of a HAC type estimator (except that there is no adjustment for mean and the sums are not scaled by their sample size):

Kt = Σh=−H,…,H k(h/(H + 1)) γh,   γh = Σj=|h|+1,…,n xj,t xj−|h|,t,

with xj,t denoting the jth high-frequency return on the tth day,
where k(x) is the Parzen kernel function:

k(x) = 1 − 6x² + 6x³ for 0 ≤ x ≤ 1/2,   k(x) = 2(1 − x)³ for 1/2 < x ≤ 1,   k(x) = 0 for x > 1.
It is necessary for H to increase with the sample size in order to consistently estimate the increments of quadratic variation in the presence of noise. We follow precisely the bandwidth choice of H spelt out in Barndorff-Nielsen et al. (2009a), to which we refer the reader for details. This realised kernel is guaranteed to be non-negative, which is quite important as some of our time series methods rely on this property.3
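Under these definitions, a minimal sketch of the Parzen realised kernel might look as follows. The bandwidth H is passed in directly rather than chosen by the data-driven rule of Barndorff-Nielsen et al. (2009a), and all names are illustrative.

```python
import numpy as np

def parzen(x):
    """Parzen weight function k(x)."""
    x = abs(x)
    if x <= 0.5:
        return 1.0 - 6.0 * x ** 2 + 6.0 * x ** 3
    if x <= 1.0:
        return 2.0 * (1.0 - x) ** 3
    return 0.0

def realised_kernel(returns, H):
    """Parzen realised kernel: sum over h = -H..H of k(h/(H+1)) * gamma_h,
    where gamma_h = sum_j x_j x_{j-|h|}; there is no mean adjustment and no
    scaling by the sample size, matching the HAC-like form in the text."""
    x = np.asarray(returns, dtype=float)
    n = len(x)
    rk = 0.0
    for h in range(-H, H + 1):
        gamma_h = float(np.dot(x[abs(h):], x[:n - abs(h)]))
        rk += parzen(h / (H + 1.0)) * gamma_h
    return rk
```

With H = 0 the estimator collapses to realised variance, and the Parzen weights deliver the non-negativity property noted above.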
We will write a sequence of daily returns as r1, r2, …, rT, while we will use FLFt−1 = σ(rt−1, rt−2, …) to denote the low-frequency past data. A benchmark model for time-varying volatility is the GARCH model of Engle (1982) and Bollerslev (1986), where we assume that

var(rt | FLFt−1) = σ²t = ωG + αGr²t−1 + βGσ²t−1.
This can be extended in many directions, for example allowing for statistical leverage. The persistence of this model, αG + βG, can be seen through the representation

r²t = ωG + (αG + βG)r²t−1 + νt − βGνt−1,
since νt = r²t − σ²t is a martingale difference with respect to FLFt−1.
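This algebra can be checked numerically: substituting νt = r²t − σ²t into the GARCH recursion reproduces the representation exactly, path by path. The sketch below simulates a GARCH(1,1) with purely illustrative parameters (not estimates from this paper) and verifies the identity.

```python
import numpy as np

# Simulate a GARCH(1,1) and check the ARMA-type representation
# r_t^2 = omega_G + (alpha_G + beta_G) r_{t-1}^2 + nu_t - beta_G nu_{t-1},
# where nu_t = r_t^2 - sigma_t^2 is a martingale difference.
rng = np.random.default_rng(1)
omega_g, alpha_g, beta_g = 0.05, 0.05, 0.90     # persistence 0.95
T = 1000
sigma2 = np.empty(T)
r = np.empty(T)
sigma2[0] = omega_g / (1.0 - alpha_g - beta_g)  # unconditional variance
r[0] = np.sqrt(sigma2[0]) * rng.standard_normal()
for t in range(1, T):
    sigma2[t] = omega_g + alpha_g * r[t - 1] ** 2 + beta_g * sigma2[t - 1]
    r[t] = np.sqrt(sigma2[t]) * rng.standard_normal()

nu = r ** 2 - sigma2                 # martingale difference w.r.t. the past
lhs = r[1:] ** 2
rhs = (omega_g + (alpha_g + beta_g) * r[:-1] ** 2
       + nu[1:] - beta_g * nu[:-1])
print(np.max(np.abs(lhs - rhs)))     # the identity holds path by path
```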
Our focus is on additionally using some daily realised measures. The models we will analyse will be called ‘HEAVY models’ (High-frEquency-bAsed VolatilitY models) and are made up of the system

var(rt | FHFt−1) = ht   and   E(RMt | FHFt−1) = µt,
where FHFt−1 is used to denote the past of rt and RMt, that is, the high-frequency dataset. The most basic example of this is the linear model

ht = ω + αRMt−1 + βht−1,   ω, α ≥ 0, β ∈ [0, 1),   (3)

µt = ωR + αRRMt−1 + βRµt−1,   ωR, αR, βR ≥ 0, αR + βR ∈ [0, 1).   (4)
These semiparametric models could be extended to include the variable r²t−1 on the right-hand side of both equations (see the discussion around (5) below), but we will see that these variables typically test out as insignificant. Hence it is useful to focus directly on the above model.4 Other possible extensions include adding a more complicated dynamic to (4), such as a component structure with short- and long-term components, a fractional model, allowing for statistical leverage type effects, or a Corsi (2009) type approximate long-memory model.
Note that (3) models the close-to-close conditional variance, while (4) models the conditional expectation of the open-to-close variation.
It will be convenient to have labels for the two equations in the HEAVY model. We call (3) the HEAVY-r model and (4) the HEAVY-RM model. Econometrically it is important to note that GARCH and HEAVY models are non-nested.
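One attraction of the system is that it yields multistep-ahead forecasts by simple iteration: the one-step values ht+1 and µt+1 are known at time t, while at longer horizons the unknown future realised measure is replaced by its µ-forecast. A minimal sketch, with purely illustrative parameter values (not estimates), is:

```python
def heavy_forecast(rm_t, h_t, mu_t, s,
                   omega=0.05, alpha=0.4, beta=0.6,
                   omega_r=0.05, alpha_r=0.4, beta_r=0.55):
    """E[h_{t+s} | past] for s >= 1 from the linear HEAVY system
    h_t  = omega   + alpha   * RM_{t-1} + beta   * h_{t-1},
    mu_t = omega_r + alpha_r * RM_{t-1} + beta_r * mu_{t-1}.
    All default parameter values are illustrative only."""
    h = omega + alpha * rm_t + beta * h_t          # h_{t+1}: known at t
    mu = omega_r + alpha_r * rm_t + beta_r * mu_t  # mu_{t+1}: known at t
    for _ in range(s - 1):
        h = omega + alpha * mu + beta * h          # future RM -> mu-forecast
        mu = omega_r + (alpha_r + beta_r) * mu     # E[RM_{t+j}] recursion
    return h

# Forecasts revert to the model-implied unconditional level as s grows.
print(heavy_forecast(1.0, 1.0, 1.0, 1))
print(heavy_forecast(1.0, 1.0, 1.0, 250))
```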
It is helpful to solve out explicitly the stationary HEAVY-r and GARCH models as

ht = ω/(1 − β) + α Σi=0,…,∞ βi RMt−1−i   and   σ²t = ωG/(1 − βG) + αG Σi=0,…,∞ βGi r²t−1−i.
In applied work we will typically estimate β to be around 0.6–0.7 and ω to be small. Thus the HEAVY-r's conditional variance is roughly a small constant plus a weighted sum of very recent realised measures. In estimated GARCH models in our later empirical work βG is usually around 0.91 or above, so it has much more memory and thus it averages more data points.
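The contrast between the two memory lengths can be made concrete. The sketch below (with illustrative parameters, β = 0.65 in the 0.6–0.7 range above) checks that the recursive HEAVY-r filter agrees with its solved-out moving-average form, and counts how many lags carry 99% of the geometric weight:

```python
import numpy as np

# Check that the recursive HEAVY-r filter matches its solved-out form
# h_t = omega/(1 - beta) + alpha * sum_i beta^i RM_{t-1-i}.
rng = np.random.default_rng(2)
omega, alpha, beta = 0.02, 0.35, 0.65      # illustrative parameter values
rm = rng.exponential(1.0, size=500)        # stand-in realised measures

h = omega / (1.0 - beta)                   # start from the no-data level
for rm_lag in rm:
    h = omega + alpha * rm_lag + beta * h  # recursive form

weights = alpha * beta ** np.arange(len(rm))
h_explicit = omega / (1.0 - beta) + float(np.dot(weights, rm[::-1]))
print(h, h_explicit)                       # the two forms agree

# Lags carrying 99% of the geometric weight: ~11 when beta = 0.65,
# against ~55 when beta_G = 0.92, so HEAVY-r averages far fewer data points.
frac = np.cumsum(beta ** np.arange(100)) * (1.0 - beta)
print(int(np.argmax(frac > 0.99)) + 1)
```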
Note that, unlike GARCH models, the HEAVY-r model has no feedback and so the properties of the realised measures determine the properties of ht.
The predictive model for the time series of realised measures is not novel. The work of Andersen et al. (2001a,b, 2003, 2007) typically looked at using least squares estimators of autoregressive cousins of (4), or their log-transformed versions. These authors also emphasised the evidence for long memory in these time series and studied various ways of making inference for those types of processes. Some of this work uses the model of Corsi (2009), which is easy to estimate and mimics some aspects of long memory.
Engle (2002) estimated GARCHX type models, which specialise to (3), based on realised variances computed using 5-minute returns. He found the coefficient on r²t−1 to be small. He also fitted models like (4), but again including lagged squared daily returns. He argues that the squared daily return helps forecast the realised variance, although there is some uncertainty over whether the effect is statistically significant (see his footnote 2). He did not, however, express (3)–(4) as a simple basis for a multistep-ahead forecasting system. Lu (2005) looked at extensions of GARCH models allowing the inclusion of lagged realised variance. He provides extensive empirical analysis of these GARCHX models.
Engle and Gallo (2006) extended Engle (2002) to look at multiple volatility indicators, trying to pool information across many indicators, including daily ranges, rather than focusing solely on theoretically sound high-frequency-based statistics. They then relate this to the VIX. In that paper they do study multistep-ahead forecasting, using a trivariate system which has daily absolute returns, the daily range and realised variance (computed using 5-minute returns for the S&P500). Their estimated models are quite sophisticated with, again, daily returns playing a large role in predicting each series. These results are at odds with our own empirical experience expressed in Section 4. Some clues as to why this might be the case can be seen from their Table I, which shows realised volatility having roughly the same average level as absolute returns and the daily range, but realised volatility being massively more variable and having a very long right-hand tail. Further, their out-of-sample comparison was based on only 217 observations, which makes their analysis somewhat noisy. Perhaps these two features distracted attention from the power and simplicity of using realised measures in HEAVY type models.
Table I. A description of the ‘OMI's realised library’, version 0.1. The table shows how each measure is built and the length of time series available, denoted T. ‘Med dur’ denotes the median duration in seconds between price updates during September 2008 in our database. All data series stop on 27 March 2009
| Asset | Med dur | Start date | T | Asset | Med dur | Start date | T |
|---|---|---|---|---|---|---|---|
| Dow Jones Industrials | 2 | 2-1-1996 | 3278 | MSCI Australia | 60 | 2-12-1999 | 2323 |
| Nasdaq 100 | 15 | 2-1-1996 | 3279 | MSCI Belgium | 60 | 1-7-1999 | 2442 |
| S&P 400 Midcap | 15 | 2-1-1996 | 3275 | MSCI Brazil | 60 | 4-10-2002 | 1587 |
| S&P 500 | 15 | 2-1-1996 | 3284 | MSCI Canada | 60 | 12-2-2001 | 2013 |
| Russell 3000 | 15 | 2-1-1996 | 3279 | MSCI Switzerland | 60 | 9-6-1999 | 2434 |
| Russell 1000 | 15 | 2-1-1996 | 3279 | MSCI Germany | 60 | 1-7-1999 | 2448 |
| Russell 2000 | 15 | 2-1-1996 | 3281 | MSCI Spain | 60 | 1-7-1999 | 2423 |
| CAC 40 | 30 | 2-1-1996 | 3322 | MSCI France | 60 | 1-7-1999 | 2455 |
| FTSE 100 | 15 | 20-10-1997 | 2862 | MSCI UK | 60 | 8-6-1999 | 2451 |
| German DAX | 15 | 2-1-1996 | 3317 | MSCI Italy | 60 | 1-7-1999 | 2437 |
| Italian MIBTEL | 60 | 3-7-2000 | 2194 | MSCI Japan | 15 | 2-12-1999 | 2240 |
| Milan MIB 30 | 60 | 2-1-1996 | 3310 | MSCI South Korea | 60 | 3-12-1999 | 2263 |
| Nikkei 250 | 60 | 5-1-1996 | 3177 | MSCI Mexico | 60 | 4-10-2002 | 1612 |
| Spanish IBEX | 5 | 2-1-1996 | 3288 | MSCI Netherlands | 60 | 1-7-1999 | 2454 |
| S&P TSE | 15 | 31-12-1998 | 2546 | MSCI World | 60 | 11-2-2001 | 2101 |
| British pound | 2 | 3-1-1999 | 2584 |  |  |  |  |
| Euro | 1 | 3-1-1999 | 2600 |  |  |  |  |
| Swiss franc | 3 | 3-1-1999 | 2579 |  |  |  |  |
| Japanese yen | 2 | 3-1-1999 | 2599 |  |  |  |  |
Brownlees and Gallo (2009) look at risk management in the context of exploiting high-frequency data. Their model, in Section 5 of their paper, links the conditional variance of returns to an affine transform of the predicted realised measure. In particular, their model has a HEAVY type structure, but instead of using ht = ω + αRMt−1 + βht−1 they model ht = ωB + αBµt. That is, they place in the HEAVY-r equation a smoothed version µt of the lagged realised measures, where the smoothing is chosen to perform well in the HEAVY-RM equation, rather than the raw version which is then smoothed through the role of the momentum parameter β (which is optimally chosen to perform well in the HEAVY-r equation). Although these models are distinct, there is quite a lot of common thinking in their structure. Maheu and McCurdy (2009) have similarities with Brownlees and Gallo (2009), but focus on an even more tightly parameterised model working with open-to-close daily returns (i.e., ignoring overnight effects), where realised variance captures much of the variation of the asset price. Giot and Laurent (2004) look at some similar types of models. Bollerslev et al. (2009) model multiple volatility indicators and daily returns, where the return model has a conditional variance which is the contemporaneous realised variance.
Finally, for some data the realised measure is not enough to entirely crowd out the lagged squared daily returns. In that case it makes sense to augment the HEAVY-r model into its extended version:

ht = ω + αRMt−1 + βht−1 + γXr²t−1.   (5)
This could be thought of as a GARCHX type model, but that name suggests it is the squared returns which drive the model, whereas in fact in our empirical work it is the lagged realised measure which does almost all the work in moving around the conditional variance, even on the rare occasions that γX is estimated to be statistically significant. There seems little point in extending the HEAVY-RM model in the same way.