Abstract

This paper studies in some detail a class of high-frequency-based volatility (HEAVY) models. These models are direct models of daily asset return volatility based on realised measures constructed from high-frequency data. Our analysis identifies that the models have momentum and mean reversion effects, and that they adjust quickly to structural breaks in the level of the volatility process. We study how to estimate the models and how they perform through the credit crunch, comparing their fit to more traditional GARCH models. We analyse a model-based bootstrap which allows us to estimate the entire predictive distribution of returns. We also provide an analysis of missing data in the context of these models. Copyright © 2010 John Wiley & Sons, Ltd.


1. INTRODUCTION

This paper analyses the performance of some predictive volatility models built to exploit high-frequency data. This is carried out through the development of a class of models we call high-frequency-based volatility (HEAVY) models, which are designed to harness high-frequency data to make multistep-ahead predictions of the volatility of returns. These models allow for both mean reversion and momentum. They are somewhat robust to certain types of structural breaks and adjust rapidly to changes in the level of volatility. The models are run across periods where the level of volatility has varied substantially to assess their ability to perform in stressful environments.

Our approach to inference will be based on the use of the ‘Oxford-Man Institute's realised library’ of historical volatility statistics, constructed using high-frequency data. Such statistics are based on a variety of theoretically sound non-parametric estimators of the daily variation of prices. In particular, it includes two estimators of interest to us. The first is realised variance, which was systematically studied by Andersen et al. (2001a) and Barndorff-Nielsen and Shephard (2002). The second, which has some robustness to market microstructure effects, is the realised kernel, which was introduced by Barndorff-Nielsen et al. (2008). Alternatives to the realised kernel include the multiscale estimators of Zhang et al. (2005) and Zhang (2006) and the pre-averaging estimator of Jacod et al. (2009).1

The focus of this paper is on predictive models, rather than on non-parametric measurement of past volatility. Torben Andersen, Tim Bollerslev and Frank Diebold, with various co-authors, have carried out important work on predicting volatility using realised variances. Typically they fit reduced-form time series models of the sequence of realised variances—e.g. autoregressions or long-memory models on the realised volatilities or their logged versions. Examples of this work include Andersen et al. (2001a,b, 2003, 2007).

The approach we follow in this paper is somewhat different. We build models out of the intellectual insights of the ARCH literature pioneered by Engle (1982) and Bollerslev (1986), but bolster them with high-frequency information. The resulting models will be called HEAVY models. These models also use ideas generated by Engle (2002), Engle and Gallo (2006) and Cipollini et al. (2007) in their work on pooling information across multiple volatility indicators and the paper by Brownlees and Gallo (2009) on risk management using realised measures. Our analysis can be thought of as taking a small subset of some of the Engle et al. models and analysing them in depth for a specific purpose, looking at their performance over many assets. Our model structure is very simple, which allows us to cleanly understand its general features, strengths and potential weaknesses. We provide no new contribution to estimation theory, simply using existing results on quasi-likelihoods. We show that when we marginalise out the effect of the realised measures, HEAVY models of squared returns have some similarities with the component GARCH model of Engle and Lee (1999). However, HEAVY models are much easier to estimate as they bring two sources of information to identify the longer-term component of volatility. We further find that the additional information in the realised measure generates out-of-sample gains, which are particularly strong when the parameters of the model are estimated to match the prediction horizon, using so-called ‘direct projection’.

The structure of this paper is as follows. In Section 2 we will define HEAVY models, which use realised measures as the basis for multi-period-ahead forecasting of volatility. We provide a detailed analysis of these models. In Section 3 we detail the main properties of ‘Oxford-Man Institute's realised library’ which we use throughout the paper. In Section 4 we fit the HEAVY models to the data and compare their predictions to those familiar from GARCH processes. Section 5 discusses possible extensions. Section 6 draws some conclusions.

2. HEAVY MODELS

2.1. Assumed Data Structure

Our analysis will be based on daily financial returns:

  • $r_1, r_2, \ldots, r_T$

and a corresponding sequence of daily realised measures:

  • $RM_1, RM_2, \ldots, RM_T$

Realised measures are theoretically sound nonparametric estimators, based on high-frequency data, of the variation of the price path of an asset during the times at which the asset trades frequently on an exchange. Realised measures ignore the variation of prices overnight and sometimes the variation in the first few minutes of the trading day, when recorded prices may contain large errors. The background to realised measures can be found in the survey articles by Andersen et al. (2009) and Barndorff-Nielsen and Shephard (2007).

The simplest realised measure is realised variance:

  • $RV_t = \sum_{j} \left( X_{t_{j,t}} - X_{t_{j-1,t}} \right)^2$ (1)

where $t_{j,t}$ are the normalised times of trades or quotes (or a subset of them) on the tth day and $X$ denotes the log price. The theoretical justification of this measure is that if prices are observed without noise then, as $\min_j |t_{j,t} - t_{j-1,t}| \downarrow 0$, it consistently estimates the quadratic variation of the price process on the tth day. It was formalised econometrically by Andersen et al. (2001a) and Barndorff-Nielsen and Shephard (2002). In practice, market microstructure noise plays an important part, and the above authors use 1- to 5-minute return data or a subset of trades or quotes (e.g. every 15th trade) to mitigate the effect of the noise. Hansen and Lunde (2006) systematically study the impact of noise on realised variance. If a subset of the data is used with the realised variance, then it is possible to average across many such estimators, each using a different subset. This is called subsampling. When we report RV estimators we always subsample them to the maximum degree possible from the data, as this averaging is always theoretically beneficial, especially in the presence of modest amounts of noise.
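The estimator in (1) and the subsampling device described above can be illustrated in a few lines. The following Python sketch is ours, not the library's code: the function names and the simulated price series are illustrative assumptions.

```python
import numpy as np

def realised_variance(prices):
    """Plain realised variance: sum of squared intraday log-price increments."""
    x = np.log(np.asarray(prices, dtype=float))
    return float(np.sum(np.diff(x) ** 2))

def subsampled_rv(prices, skip):
    """Average realised variance over `skip` offset grids (subsampling).

    Each grid uses every `skip`-th observation, starting at a different
    offset; averaging the resulting estimators mitigates the effect of
    modest microstructure noise.
    """
    x = np.log(np.asarray(prices, dtype=float))
    rvs = [np.sum(np.diff(x[k::skip]) ** 2) for k in range(skip)]
    return float(np.mean(rvs))

# Illustrative day of 390 one-minute prices from a geometric random walk
rng = np.random.default_rng(0)
p = 100.0 * np.exp(np.cumsum(0.0005 * rng.standard_normal(390)))
rv_full = realised_variance(p)
rv_sub = subsampled_rv(p, skip=5)
```

With `skip=1` the subsampled estimator reduces to the plain realised variance, since there is a single grid to average over.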

Three classes of estimators which are somewhat robust to noise have been suggested in the literature: pre-averaging (Jacod et al., 2009), multiscale (Zhang, 2006; Zhang et al., 2005) and realised kernel (Barndorff-Nielsen et al., 2008).2 Here we focus on the realised kernel in the case where we use a Parzen weight function. It has the familiar form of a HAC type estimator (except that there is no adjustment for mean and the sums are not scaled by their sample size):

  • $K_t = \sum_{h=-H}^{H} k\!\left(\frac{h}{H+1}\right) \gamma_h, \qquad \gamma_h = \sum_{j=|h|+1}^{n} x_j x_{j-|h|}, \qquad x_j = X_{t_{j,t}} - X_{t_{j-1,t}}$ (2)

where k(x) is the Parzen kernel function:

  • $k(x) = \begin{cases} 1 - 6x^2 + 6x^3, & 0 \le x \le 1/2 \\ 2(1-x)^3, & 1/2 < x \le 1 \\ 0, & x > 1 \end{cases}$

It is necessary for H to increase with the sample size in order to consistently estimate the increments of quadratic variation in the presence of noise. We follow precisely the bandwidth choice of H spelt out in Barndorff-Nielsen et al. (2009a), to which we refer the reader for details. This realised kernel is guaranteed to be non-negative, which is quite important as some of our time series methods rely on this property.3
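A minimal sketch of the Parzen weight function and the realised kernel in (2). This is illustrative only: the bandwidth H is passed in directly rather than chosen by the Barndorff-Nielsen et al. (2009a) rule, and the function names are our own.

```python
import numpy as np

def parzen(x):
    """Parzen weight function used in the realised kernel."""
    x = abs(x)
    if x <= 0.5:
        return 1.0 - 6.0 * x**2 + 6.0 * x**3
    if x <= 1.0:
        return 2.0 * (1.0 - x) ** 3
    return 0.0

def realised_kernel(returns, H):
    """Realised kernel: sum over h = -H..H of k(h/(H+1)) * gamma_h,
    where gamma_h is the unscaled h-th autocovariance-style sum of
    high-frequency returns (no mean adjustment, no 1/n scaling)."""
    x = np.asarray(returns, dtype=float)
    n = len(x)
    gamma = lambda h: float(np.dot(x[abs(h):], x[:n - abs(h)]))
    return sum(parzen(h / (H + 1.0)) * gamma(h) for h in range(-H, H + 1))
```

Setting H = 0 recovers the plain realised variance, since only the h = 0 term with unit weight survives.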

2.2. Definitions

We will write a sequence of daily returns as r1, r2, …, rT, while we will use $\mathcal{F}^{LF}_{t-1}$ to denote low-frequency past data. A benchmark model for time-varying volatility is the GARCH model of Engle (1982) and Bollerslev (1986), where we assume that

  • $\mathrm{var}(r_t \mid \mathcal{F}^{LF}_{t-1}) = \sigma_t^2 = \omega_G + \alpha_G r_{t-1}^2 + \beta_G \sigma_{t-1}^2$

This can be extended in many directions, for example allowing for statistical leverage. The persistence of this model, αG + βG, can be seen through the representation

  • $r_t^2 = \omega_G + (\alpha_G + \beta_G) r_{t-1}^2 + v_t - \beta_G v_{t-1}$

since $v_t = r_t^2 - \sigma_t^2$ is a martingale difference with respect to $\mathcal{F}^{LF}_{t-1}$.

Our focus is on additionally using some daily realised measures. The models we will analyse will be called ‘HEAVY models’ (High-frEquency-bAsed VolatilitY models) and are made up of the system

  • $\mathrm{var}(r_t \mid \mathcal{F}^{HF}_{t-1}) = h_t, \qquad \mathrm{E}(RM_t \mid \mathcal{F}^{HF}_{t-1}) = \mu_t$

where $\mathcal{F}^{HF}_{t-1}$ is used to denote the past of rt and RMt, that is, the high-frequency dataset. The most basic example of this is the linear model

  • $h_t = \omega + \alpha RM_{t-1} + \beta h_{t-1}$ (3)
  • $\mu_t = \omega_R + \alpha_R RM_{t-1} + \beta_R \mu_{t-1}$ (4)

These semiparametric models could be extended to include the variable $r_{t-1}^2$ on the right-hand side of both equations (see the discussion of (5) in a moment), but we will see this variable typically tests out. Hence it is useful to focus directly on the above model.4 Other possible extensions include adding a more complicated dynamic to (4), such as a component structure with short- and long-term components, a fractional model, allowing for statistical leverage type effects, or a Corsi (2009) type approximate long-memory model.

Note that (3) models the close-to-close conditional variance, while (4) models the conditional expectation of the open-to-close variation.

It will be convenient to have labels for the two equations in the HEAVY model. We call (3) the HEAVY-r model and (4) the HEAVY-RM model. Econometrically it is important to note that GARCH and HEAVY models are non-nested.
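The two recursions (3) and (4) are straightforward to filter through a series of realised measures. The sketch below is illustrative only; the function name and the initialisation at the unconditional means are our own assumptions.

```python
import numpy as np

def heavy_filter(rm, omega, alpha, beta, omega_r, alpha_r, beta_r):
    """Run the HEAVY-r and HEAVY-RM recursions over a realised-measure series:

        h[t]  = omega   + alpha * rm[t-1]   + beta * h[t-1]     (HEAVY-r)
        mu[t] = omega_r + alpha_r * rm[t-1] + beta_r * mu[t-1]  (HEAVY-RM)

    Both recursions are started at their (sample-based) unconditional means,
    an initialisation assumption made here for the sketch.
    """
    rm = np.asarray(rm, dtype=float)
    T = len(rm)
    h = np.empty(T)
    mu = np.empty(T)
    h[0] = (omega + alpha * rm.mean()) / (1.0 - beta)
    mu[0] = omega_r / (1.0 - alpha_r - beta_r)
    for t in range(1, T):
        h[t] = omega + alpha * rm[t - 1] + beta * h[t - 1]
        mu[t] = omega_r + alpha_r * rm[t - 1] + beta_r * mu[t - 1]
    return h, mu
```

Note how the HEAVY-r recursion has no feedback from returns: the realised measures alone drive the conditional variance path.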

It is helpful to solve out explicitly the stationary HEAVY-r and GARCH models as

  • $h_t = \frac{\omega}{1-\beta} + \alpha \sum_{j=0}^{\infty} \beta^j RM_{t-1-j}, \qquad \sigma_t^2 = \frac{\omega_G}{1-\beta_G} + \alpha_G \sum_{j=0}^{\infty} \beta_G^j r_{t-1-j}^2$

In applied work we will typically estimate β to be around 0.6–0.7 and ω to be small. Thus the HEAVY-r's conditional variance is roughly a small constant plus a weighted sum of very recent realised measures. In estimated GARCH models in our later empirical work βG is usually around 0.91 or above, so it has much more memory and thus it averages more data points.

Note that, unlike GARCH models, the HEAVY-r model has no feedback and so the properties of the realised measures determine the properties of $h_t$.

The predictive model for the time series of realised measures is not novel. The work of Andersen et al. (2001a,b, 2003, 2007) typically used least squares estimators of autoregressive cousins of (4), or their log-transformed versions. These authors also emphasised the evidence for long memory in these time series and studied various ways of making inference for those types of processes. Some of this work uses the model of Corsi (2009), which is easy to estimate and mimics some aspects of long memory.

Engle (2002) estimated GARCHX type models, which specialise to (3), based on realised variances computed using 5-minute returns. He found the coefficient on $r_{t-1}^2$ to be small. He also fitted models like (4), but again including lagged squared daily returns. He argues that the squared daily return helps forecast the realised variance, although there is some uncertainty over whether the effect is statistically significant (see his footnote 2). He did not, however, express (3)–(4) as a simple basis for a multistep-ahead forecasting system. Lu (2005) looked at extensions of GARCH models allowing the inclusion of lagged realised variance, and provides extensive empirical analysis of these GARCHX models.

Engle and Gallo (2006) extended Engle (2002) to look at multiple volatility indicators, trying to pool information across many indicators including daily ranges, rather than focusing solely on theoretically sound high-frequency-based statistics. They then relate this to the VIX. In that paper they do study multistep-ahead forecasting using a trivariate system which has daily absolute returns, daily range and realised variance (computed using 5-minute returns for the S&P500). Their estimated models are quite sophisticated with, again, daily returns playing a large role in predicting each series. These results are at odds with our own empirical experience expressed in Section 4. Some clues as to why this might be the case can be seen from their Table I, which shows realised volatility having roughly the same average level as absolute returns and daily range but realised volatility being massively more variable and having a very long right-hand tail. Further, their out-of-sample comparison was based only on 217 observations, which makes their analysis somewhat noisy. Perhaps these two features distracted from the power and simplicity of using realised measures in HEAVY type models.

Table I. A description of the ‘OMI's realised library’, version 0.1. The table shows how each measure is built and the length of time series available, denoted T. ‘Med dur’ denotes the median duration in seconds between price updates during September 2008 in our database. All data series stop on 27 March 2009
Asset                   Med dur  Start date  T     Asset               Med dur  Start date  T
Dow Jones Industrials   2        2-1-1996    3278  MSCI Australia      60       2-12-1999   2323
Nasdaq 100              15       2-1-1996    3279  MSCI Belgium        60       1-7-1999    2442
S&P 400 Midcap          15       2-1-1996    3275  MSCI Brazil         60       4-10-2002   1587
S&P 500                 15       2-1-1996    3284  MSCI Canada         60       12-2-2001   2013
Russell 3000            15       2-1-1996    3279  MSCI Switzerland    60       9-6-1999    2434
Russell 1000            15       2-1-1996    3279  MSCI Germany        60       1-7-1999    2448
Russell 2000            15       2-1-1996    3281  MSCI Spain          60       1-7-1999    2423
CAC 40                  30       2-1-1996    3322  MSCI France         60       1-7-1999    2455
FTSE 100                15       20-10-1997  2862  MSCI UK             60       8-6-1999    2451
German DAX              15       2-1-1996    3317  MSCI Italy          60       1-7-1999    2437
Italian MIBTEL          60       3-7-2000    2194  MSCI Japan          15       2-12-1999   2240
Milan MIB 30            60       2-1-1996    3310  MSCI South Korea    60       3-12-1999   2263
Nikkei 250              60       5-1-1996    3177  MSCI Mexico         60       4-10-2002   1612
Spanish IBEX            5        2-1-1996    3288  MSCI Netherlands    60       1-7-1999    2454
S&P TSE                 15       31-12-1998  2546  MSCI World          60       11-2-2001   2101
British pound           2        3-1-1999    2584
Euro                    1        3-1-1999    2600
Swiss franc             3        3-1-1999    2579
Japanese yen            2        3-1-1999    2599

Brownlees and Gallo (2009) look at risk management in the context of exploiting high-frequency data. Their model, in Section 5 of their paper, links the conditional variance of returns to an affine transform of the predicted realised measure. In particular, their model has a HEAVY type structure, but instead of using ht = ω + αRMt−1 + βht−1 they model ht = ωB + αBµt. That is, they place in the HEAVY-r equation a smoothed version µt of the lagged realised measures, where the smoothing is chosen to perform well in the HEAVY-RM equation, rather than the raw version, which is then smoothed through the role of the momentum parameter β (which is optimally chosen to perform well in the HEAVY-r equation). Although these models are distinct, there is a good deal of common thinking in their structure. The model of Maheu and McCurdy (2009) has similarities with that of Brownlees and Gallo (2009), but focuses on an even more tightly parameterised specification working with open-to-close daily returns (i.e., ignoring overnight effects), where realised variance captures much of the variation of the asset price. Giot and Laurent (2004) look at some similar types of models. Bollerslev et al. (2009) model multiple volatility indicators and daily returns, where the return model's conditional variance is the contemporaneous realised variance.

Finally, for some data the realised measure is not enough to entirely crowd out the lagged squared daily returns. In that case it makes sense to augment the HEAVY-r model into its extended version:

  • $h_t = \omega + \alpha RM_{t-1} + \beta h_{t-1} + \gamma_X r_{t-1}^2$ (5)

This could be thought of as a GARCHX type model, but that name suggests it is the squared returns which drive the model, whereas in fact in our empirical work it is the lagged realised measure which does almost all the work at moving around the conditional variance, even on the rare occasions that γX is estimated to be statistically significant. There seems little point in extending the HEAVY-RM model in the same way.

2.3. Representations and Dynamics

2.3.1. Multiplicative Representation

The vector multiplicative representation of HEAVY models rewrites (3) and (4) as

  • $r_t^2 = h_t \varepsilon_t, \qquad RM_t = \mu_t \eta_t$

Such representations are the key behind the work of Engle (2002) and Engle and Gallo (2006). They are powerful as (εt, ηt)′− (1, 1)′ is a martingale difference with respect to $\mathcal{F}^{HF}_{t-1}$.5

The dynamic structure of the bivariate model can be gleaned from writing

  • $\begin{pmatrix} h_{t+1} \\ \mu_{t+1} \end{pmatrix} = \begin{pmatrix} \omega \\ \omega_R \end{pmatrix} + \begin{pmatrix} \beta & \alpha \\ 0 & \alpha_R + \beta_R \end{pmatrix} \begin{pmatrix} h_t \\ \mu_t \end{pmatrix} + \begin{pmatrix} \alpha \\ \alpha_R \end{pmatrix} (RM_t - \mu_t)$

Hence this process is driven by a common factor RMt − µt, which is itself a martingale difference sequence with respect to $\mathcal{F}^{HF}_{t-1}$.

The memory in the HEAVY model is governed by

  • $\begin{pmatrix} \beta & \alpha \\ 0 & \alpha_R + \beta_R \end{pmatrix}$

This has two eigenvalues (e.g. Golub and Van Loan, 1989, p. 333): β, which we call a momentum parameter (a justification for this name will be given shortly), and αR + βR, which is the persistence parameter of the realised measure. In empirical work we will typically see β to be around 0.6 and the persistence parameter being close to but slightly less than one, so αR + βR governs the implied memory of $h_t$ at longer lags. The persistence parameter will be close to that seen for estimated αG + βG for GARCH models.
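The eigenvalue claim is easy to verify numerically. A small sketch, with parameter values chosen purely for illustration:

```python
import numpy as np

# Transition matrix of the (h_t, mu_t) recursion for illustrative parameter
# values; being upper triangular, its eigenvalues are the diagonal entries:
# beta (momentum) and alpha_r + beta_r (persistence of the realised measure).
alpha, beta = 0.35, 0.62
alpha_r, beta_r = 0.42, 0.55
A = np.array([[beta, alpha],
              [0.0, alpha_r + beta_r]])
eig = np.sort(np.linalg.eigvals(A).real)
```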

The role of β is interesting. In typical GARCH models the main feature is that the current value of conditional variance monotonically mean reverts to the long-run average value as the forecast horizon increases. In HEAVY models this is not the case because of β.

2.3.2. Dynamics of the $r_t^2$ Process

The HEAVY model can be solved out to imply the autocovariance function of the squared returns. This seems of little practical interest but allows some theoretical insights.

Assume that αR, βR, β∈[0, 1) and αR + βR < 1. Define $u_t = r_t^2 - h_t$ and $u_{Rt} = RM_t - \mu_t$, which under the model are martingale difference sequences with respect to $\mathcal{F}^{HF}_{t-1}$. We can write out the process for $r_t^2$ from a HEAVY model as

  • $(1 - \beta L) r_t^2 = \omega + \alpha RM_{t-1} + (1 - \beta L) u_t$

where L is the lag operator. Therefore

  • $(1 - (\alpha_R + \beta_R) L)(1 - \beta L) r_t^2 = \omega(1 - \alpha_R - \beta_R) + \alpha (1 - (\alpha_R + \beta_R) L) RM_{t-1} + (1 - (\alpha_R + \beta_R) L)(1 - \beta L) u_t$

Likewise:

  • $(1 - (\alpha_R + \beta_R) L) RM_t = \omega_R + (1 - \beta_R L) u_{Rt}$

Combining delivers the result

  • $(1 - \beta L)(1 - (\alpha_R + \beta_R) L) r_t^2 = \omega(1 - \alpha_R - \beta_R) + \alpha \omega_R + \xi_t$ (6)

where

  • $\xi_t = (1 - (\alpha_R + \beta_R) L)(1 - \beta L) u_t + \alpha (1 - \beta_R L) u_{R,t-1}$

If we assume that

  • $\mathrm{cov}\!\begin{pmatrix} u_t \\ u_{Rt} \end{pmatrix}$

exists then ξt has a zero-mean weak MA(2) representation and $r_t^2$ is weak GARCH(2,2) in the sense of Drost and Nijman (1993). The autoregressive roots of $r_t^2$ are β and αR + βR, so are real and positive. A by-product of the derivation of these results is the VARMA(1,1) representation

  • $\begin{pmatrix} r_t^2 \\ RM_t \end{pmatrix} = \begin{pmatrix} \omega \\ \omega_R \end{pmatrix} + \begin{pmatrix} \beta & \alpha \\ 0 & \alpha_R + \beta_R \end{pmatrix} \begin{pmatrix} r_{t-1}^2 \\ RM_{t-1} \end{pmatrix} + \begin{pmatrix} u_t \\ u_{Rt} \end{pmatrix} - \begin{pmatrix} \beta & 0 \\ 0 & \beta_R \end{pmatrix} \begin{pmatrix} u_{t-1} \\ u_{R,t-1} \end{pmatrix}$

and the equilibrium correction form (see Hendry, 1995):

  • $\Delta \begin{pmatrix} r_t^2 \\ RM_t \end{pmatrix} = \begin{pmatrix} \omega \\ \omega_R \end{pmatrix} - \begin{pmatrix} 1 - \beta & -\alpha \\ 0 & 1 - \alpha_R - \beta_R \end{pmatrix} \begin{pmatrix} r_{t-1}^2 \\ RM_{t-1} \end{pmatrix} + \begin{pmatrix} u_t \\ u_{Rt} \end{pmatrix} - \begin{pmatrix} \beta & 0 \\ 0 & \beta_R \end{pmatrix} \begin{pmatrix} u_{t-1} \\ u_{R,t-1} \end{pmatrix}$ (7)

An important aspect of the above result is that the memory parameters in the MA(2) depend upon the covariance matrix of (ut, uRt).

The weak GARCH(2,2) representation has some similarities with the component model of Engle and Lee (1999, equations (2.4) and (2.5)), which models

  • $\sigma_t^2 = q_t + \alpha_C (r_{t-1}^2 - q_{t-1}) + \beta_C (\sigma_{t-1}^2 - q_{t-1}), \qquad q_t = \omega_C + \rho_C q_{t-1} + \phi_C (r_{t-1}^2 - \sigma_{t-1}^2)$

The qt process is called the long-term component and $\sigma_t^2 - q_t$ the transitory component of the conditional variance. Thus we expect ρC to be close to one and αC + βC to be substantially less than one.

2.3.3. Momentum

An important aspect of the marginal $r_t^2$ process is that

  • $r_t^2 = \omega(1 - \alpha_R - \beta_R) + \alpha \omega_R + (\beta + \alpha_R + \beta_R) r_{t-1}^2 - \beta(\alpha_R + \beta_R) r_{t-2}^2 + \xi_t$ (8)

This makes plain the role of β in generating momentum. It can push αR + βR + β above one, heightening significant moves in the volatility, while αR + βR < 1 causes it to mean revert. If β = 0 then $r_t^2$ becomes a weak GARCH(1,2) and has no momentum, although the realised measure still drives volatility. The component model of Engle and Lee (1999) is also a weak GARCH(1,2) if ρC = 0. The sophisticated model of Engle and Gallo (2006) is capable of generating momentum effects, of course.

If βR = β then

  • $\xi_t = (1 - \beta_R L)\left\{ (1 - (\alpha_R + \beta) L) u_t + \alpha u_{R,t-1} \right\}$

so we can divide through by (1 − βRL) to produce

  • $(1 - (\alpha_R + \beta) L) r_t^2 = \frac{\omega(1 - \alpha_R - \beta) + \alpha \omega_R}{1 - \beta} + (1 - (\alpha_R + \beta) L) u_t + \alpha u_{R,t-1}$

Hence under that constraint $r_t^2$ is a weak GARCH(1,1) model.
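The factorisation of the autoregressive polynomial can also be checked numerically. The parameter values below are illustrative only:

```python
import numpy as np

# Illustrative parameter values in the typical empirical range.
beta, alpha_r, beta_r = 0.6, 0.4, 0.57

# AR(2) lag polynomial of the squared-return process:
#   1 - (beta + alpha_r + beta_r) L + beta * (alpha_r + beta_r) L^2,
# which factors as (1 - beta L)(1 - (alpha_r + beta_r) L), so the
# autoregressive roots (companion eigenvalues) are beta and alpha_r + beta_r.
a1 = beta + alpha_r + beta_r        # coefficient on the first lag
a2 = -beta * (alpha_r + beta_r)     # coefficient on the second lag
roots = np.sort(np.roots([1.0, -a1, -a2]).real)
```

Note that here a1 exceeds one, illustrating the momentum effect discussed above, while both roots remain inside the unit circle so the process still mean reverts.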

2.3.4. Integrated HEAVY Models

The marginal process (8) can be rewritten in equilibrium correction form as

  • $\Delta r_t^2 = \omega(1 - \alpha_R - \beta_R) + \alpha \omega_R - (1 - \beta)(1 - \alpha_R - \beta_R) r_{t-1}^2 + \beta(\alpha_R + \beta_R) \Delta r_{t-1}^2 + \xi_t$

where Δ is the difference operator. In practice the coefficients on the level and difference are likely to be slightly negative and close to β, respectively.

Clements and Hendry (1999) have argued that most economic forecasting failure is due to shifts in long-run relationships, and so this can be mitigated by imposing unit roots on the model. In this context this means setting (1 − β)(1 − αR − βR) to be zero. In order to avoid β being set to one, this is achieved by setting αR + βR = 1 and killing the intercept ωR (otherwise the intercept becomes a trend slope). The resulting forecasting model would then be based around

  • $\Delta r_t^2 = \beta \Delta r_{t-1}^2 + \xi_t$

which has momentum but no mean reversion. This type of model would not be upset by structural changes in the level of the process. Imposing the unit root in GARCH type models is usually associated with the work of RiskMetrics, but that analysis does not have any momentum effects. Hence such a suggestion looks novel in the context of volatility models. It would imply using a HEAVY model of the type, for example, of

  • $h_t = \omega + \alpha RM_{t-1} + \beta h_{t-1}$ (9)
  • $\mu_t = \alpha_R RM_{t-1} + (1 - \alpha_R) \mu_{t-1}$ (10)

We call this the ‘integrated HEAVY model’. We will see later that this very simple model can generate reliable multiperiod forecasts.

2.3.5. Iterative Multistep-Ahead Forecasts

Multistep-ahead forecasts of volatility are very important for asset allocation or risk assessment, since these tasks are usually carried out over multiple days. For one-step-ahead forecasts of volatility we only need (3), but for multistep forecasts equation (4) plays a central role.

For s ≥ 0, from the martingale difference representation, we have

  • $\mathrm{E}\!\left[\begin{pmatrix} h_{t+s} \\ \mu_{t+s} \end{pmatrix} \,\Big|\, \mathcal{F}^{HF}_{t-1}\right] = \sum_{j=0}^{s-1} \vartheta^j \begin{pmatrix} \omega \\ \omega_R \end{pmatrix} + \vartheta^s \begin{pmatrix} h_t \\ \mu_t \end{pmatrix}$ (11)

Write $\vartheta = \begin{pmatrix} \beta & \alpha \\ 0 & \alpha_R + \beta_R \end{pmatrix}$. It has two roots, β and αR + βR. Further

  • $\vartheta^s = \begin{pmatrix} \beta^s & \alpha \dfrac{(\alpha_R + \beta_R)^s - \beta^s}{\alpha_R + \beta_R - \beta} \\ 0 & (\alpha_R + \beta_R)^s \end{pmatrix}$

Of course, of interest is the integrated variance prediction $\mathrm{E}\!\left(\sum_{j=0}^{s} r_{t+j}^2 \,\middle|\, \mathcal{F}^{HF}_{t-1}\right)$. We will assume this can be simplified to

  • $\mathrm{E}\!\left(\sum_{j=0}^{s} r_{t+j}^2 \,\Big|\, \mathcal{F}^{HF}_{t-1}\right) = \sum_{j=0}^{s} \mathrm{E}(h_{t+j} \mid \mathcal{F}^{HF}_{t-1})$

which would mean (11) could be used to compute it.
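The recursion behind (11) can be iterated directly. The sketch below is illustrative (the function name is our own); it exploits the fact that RMt − µt is a martingale difference, so the forecast state simply obeys the linear map above.

```python
import numpy as np

def heavy_forecast(h_t, mu_t, omega, alpha, beta, omega_r, alpha_r, beta_r, S):
    """Iterate s-step-ahead forecasts of (h, mu), s = 0, ..., S, using

        E[(h_{t+s}, mu_{t+s})' | F_{t-1}] = c + A @ E[(h_{t+s-1}, mu_{t+s-1})' | F_{t-1}],

    where A is the upper-triangular transition matrix and c the intercepts.
    The martingale difference property of RM_t - mu_t kills the shock term."""
    A = np.array([[beta, alpha],
                  [0.0, alpha_r + beta_r]])
    c = np.array([omega, omega_r])
    state = np.array([h_t, mu_t], dtype=float)
    out = [state.copy()]
    for _ in range(S):
        state = c + A @ state
        out.append(state.copy())
    return np.array(out)   # row s holds (E h_{t+s}, E mu_{t+s})
```

Summing the first column of the output over s then gives the cumulative variance prediction discussed above.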

2.3.6. Targeting Reparameterisation

In the case of a stationary HEAVY model there are some advantages in reparameterising the equations in the HEAVY model so the intercepts are explicitly related to the unconditional mean of squared returns and realised measures. In the HEAVY-RM model this is easy to do as

  • $\mu_t = \mu_R (1 - \alpha_R - \beta_R) + \alpha_R RM_{t-1} + \beta_R \mu_{t-1}$ (12)

so that E(RMt) = µR. For the HEAVY-r equation it is less clear, since the realised measure is likely to be a downward-biased measure of the daily squared return (due to overnight effects). Writing $\kappa = \mathrm{E}(r_t^2)/\mathrm{E}(RM_t) = \mu/\mu_R$, then we can set

  • $h_t = \mu_R \left( \kappa (1 - \beta) - \alpha \right) + \alpha RM_{t-1} + \beta h_{t-1}$ (13)

Taken together we call (13) and (12) the ‘targeting parameterisation’ for the HEAVY model.

This parameterisation of the HEAVY model has the virtue that it is possible to use the estimators6

  • $\hat{\mu}_R = \frac{1}{T} \sum_{t=1}^{T} RM_t, \qquad \hat{\mu} = \frac{1}{T} \sum_{t=1}^{T} r_t^2, \qquad \hat{\kappa} = \hat{\mu} / \hat{\mu}_R$

of µR, µ and κ. Thus this reparameterisation is the HEAVY extension of variance targeting introduced by Engle and Mezrich (1996). When these estimators are plugged into the quasi-likelihood functions it makes optimisation easier, as the dimension is smaller, but it does alter the resulting asymptotic standard errors. This is discussed in the next subsection.
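These plug-in moment estimators are one-liners; a minimal sketch (the function name is illustrative):

```python
import numpy as np

def targeting_estimates(returns, rm):
    """Method-of-moments plug-ins for the targeting parameterisation:
    mu_R = mean realised measure, mu = mean squared return,
    kappa = mu / mu_R (the overnight adjustment ratio)."""
    r2 = np.asarray(returns, dtype=float) ** 2
    rm = np.asarray(rm, dtype=float)
    mu_r = float(rm.mean())
    mu = float(r2.mean())
    return mu_r, mu, mu / mu_r
```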

2.4. Inference for HEAVY Based Models

2.4.1. Quasi-likelihood Estimation

Inference for HEAVY models is a simple application of multiplicative error models discussed by Engle (2002), who uses standard quasi-likelihood asymptotic theory.

The HEAVY model has two equations:

  • $\mathrm{var}(r_t \mid \mathcal{F}^{HF}_{t-1}) = h_t = \omega + \alpha RM_{t-1} + \beta h_{t-1}, \qquad \mathrm{E}(RM_t \mid \mathcal{F}^{HF}_{t-1}) = \mu_t = \omega_R + \alpha_R RM_{t-1} + \beta_R \mu_{t-1}$

We will estimate each equation separately, which makes optimisation straightforward. No attempt will be made to pool information across the two equations, although more information is potentially available if this was attempted (see the analysis of Cipollini et al., 2007).

The first equation will be initially estimated using a Gaussian quasi-likelihood:

  • $\ell_r(\lambda) = -\frac{1}{2} \sum_{t=1}^{T} \left( \log h_t + \frac{r_t^2}{h_t} \right)$ (14)

where we take $\lambda = (\omega, \psi')'$ with $\psi = (\alpha, \beta)'$.

The second equation will be estimated using the same structure with

  • $\ell_{RM}(\lambda_R) = -\frac{1}{2} \sum_{t=1}^{T} \left( \log \mu_t + \frac{RM_t}{\mu_t} \right)$ (15)

where we take $\lambda_R = (\omega_R, \psi_R')'$ with $\psi_R = (\alpha_R, \beta_R)'$.

In inference we will regard the parameters as having no link between the HEAVY-r and HEAVY-RM models, i.e. (ω, ψ) and (ωR, ψR) are variation free (e.g. Engle et al., 1983), which we will see in the next subsection is important for inference. It then follows that equation-by-equation optimisation is all that is necessary to maximise the quasi-likelihood. This is convenient as existing GARCH type code can simply be used in this context. We will write θ = (ω, ψ′, ωR, ψR′)′ and the resulting maximum of the quasi-likelihoods as $\hat{\theta}$.
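Equation-by-equation estimation amounts to minimising two separate negative quasi-log-likelihoods. The sketch below is illustrative, not the authors' code: the function names and the initialisation of the recursions at sample means are our own assumptions. A numerical optimiser such as scipy.optimize.minimize would be applied to each function separately.

```python
import numpy as np

def heavy_r_negloglik(params, returns, rm):
    """Negative Gaussian quasi-likelihood for the HEAVY-r equation,
    0.5 * sum(log h_t + r_t^2 / h_t), filtering h_t through (3).
    Returns inf outside the admissible parameter region."""
    omega, alpha, beta = params
    if omega <= 0 or alpha < 0 or not (0 <= beta < 1):
        return np.inf
    r2 = np.asarray(returns, dtype=float) ** 2
    rm = np.asarray(rm, dtype=float)
    h = r2.mean()                      # initialise at the sample variance
    nll = 0.0
    for t in range(len(r2)):
        nll += 0.5 * (np.log(h) + r2[t] / h)
        h = omega + alpha * rm[t] + beta * h
    return nll

def heavy_rm_negloglik(params, rm):
    """Negative quasi-likelihood for the HEAVY-RM equation,
    0.5 * sum(log mu_t + RM_t / mu_t), filtering mu_t through (4)."""
    omega_r, alpha_r, beta_r = params
    if omega_r <= 0 or alpha_r < 0 or beta_r < 0 or alpha_r + beta_r >= 1:
        return np.inf
    rm = np.asarray(rm, dtype=float)
    mu = rm.mean()                     # initialise at the sample mean
    nll = 0.0
    for t in range(len(rm)):
        nll += 0.5 * (np.log(mu) + rm[t] / mu)
        mu = omega_r + alpha_r * rm[t] + beta_r * mu
    return nll
```

Because the two parameter vectors are variation free, minimising each objective separately is equivalent to maximising the joint quasi-likelihood.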

The alternative targeting parameterisation has

  • $\mu_t = \mu_R (1 - \alpha_R - \beta_R) + \alpha_R RM_{t-1} + \beta_R \mu_{t-1}, \qquad h_t = \mu_R \left( \kappa(1-\beta) - \alpha \right) + \alpha RM_{t-1} + \beta h_{t-1}$

so that E(RMt) = µR and $\mathrm{E}(r_t^2) = \kappa \mu_R$. This has the virtue that we can employ a two-step approach, first setting

  • $\hat{\mu}_R = \frac{1}{T} \sum_{t=1}^{T} RM_t, \qquad \hat{\kappa} = \frac{\sum_{t=1}^{T} r_t^2}{\sum_{t=1}^{T} RM_t}$

and then we compute

  • $(\hat{\psi}, \hat{\psi}_R) = \arg\max_{\psi, \psi_R} \left\{ \ell_r(\psi; \hat{\mu}_R, \hat{\kappa}) + \ell_{RM}(\psi_R; \hat{\mu}_R) \right\}$

This reduces the dimension of the optimisations by one each time; this has the disadvantage that the two equations are no longer variation-free, which complicates the asymptotic distribution.

2.4.2. Quasi-likelihood Based Asymptotic Distribution

Inference using robust standard errors is standard in this context of (14) and (15). We stack the scores so that

  • $m_t(\theta) = \begin{pmatrix} \partial \ell_{r,t}(\lambda)/\partial \lambda \\ \partial \ell_{RM,t}(\lambda_R)/\partial \lambda_R \end{pmatrix}$

where θ = (λ′, λR′)′. Then if we denote the point in the parameter space where the model (3) and (4) holds as θ* then under the model

  • $\mathrm{E}\{ m_t(\theta^*) \mid \mathcal{F}^{HF}_{t-1} \} = 0$

that is, mt(θ*) is a martingale difference sequence with respect to $\mathcal{F}^{HF}_{t-1}$. Under standard quasi-likelihood conditions we have

  • $\sqrt{T}\left( \hat{\theta} - \theta^* \right) \xrightarrow{d} N\!\left( 0, \mathcal{I}^{-1} \mathcal{J} \mathcal{I}^{-1} \right)$

where the Hessian is

  • $\mathcal{I} = -\mathrm{E}\!\left\{ \frac{\partial m_t(\theta^*)}{\partial \theta'} \right\} = \begin{pmatrix} \mathcal{I}_{\lambda} & 0 \\ 0 & \mathcal{I}_{\lambda_R} \end{pmatrix}$ (16)

and

  • $\mathcal{J} = \mathrm{E}\{ m_t(\theta^*) m_t(\theta^*)' \}$ (17)

The block diagonality of (16) is due to the variation-free property of the parameters, while it is not necessary to use an HAC estimator in (17) due to the martingale difference features of the stacked scores. This is a straightforward application of quasi-likelihood theory and can be viewed as an extension of Bollerslev and Wooldridge (1992) and is discussed extensively in Cipollini et al. (2007).

The most important implication of the block diagonality of the Hessian (16) is that the equation-by-equation standard errors for the HEAVY-r and HEAVY-RM are correct, even when viewing the HEAVY model as a system. This means that standard software can be used to compute them.

When the two-step approach is used on the targeting parameterisation then the moment conditions change to

  • $m_t^E(\theta) = \begin{pmatrix} RM_t - \mu_R \\ r_t^2 - \kappa \mu_R \\ \partial \ell_{r,t}/\partial \psi \\ \partial \ell_{RM,t}/\partial \psi_R \end{pmatrix}$

The moment conditions are no longer martingale difference sequences, but they do have a zero mean for all values of t at the true parameter point:

  • $\mathrm{E}\{ m_t^E(\theta^*) \} = 0$

while $\hat{\mathcal{J}}$ needs to be an HAC estimator applied to the time series of $m_t^E(\theta)$.

2.4.3. Non-nested Tests

One natural way to assess the forecasting power of the HEAVY model is to compare it to that generated by the GARCH model. This can be assessed at distinct horizons by comparing the performance using the QLIK loss function:

  • $L\!\left( \tilde{\sigma}^2_{t+s}, h_{t+s|t-1} \right) = \log h_{t+s|t-1} + \frac{\tilde{\sigma}^2_{t+s}}{h_{t+s|t-1}}$ (18)

where $\tilde{\sigma}^2_{t+s}$ is the proxy used for the time t + s (latent) variance and $h_{t+s|t-1}$ is some predictor made at time t − 1. This loss function has been shown to be robust to certain types of noise in the proxy in Patton (2009) and Patton and Sheppard (2009a). It will later be used to compare the forecast performance of non-nested volatility models. Also important is the cumulative loss function, which we take as

  • $L\!\left( \sum_{j=0}^{s} \tilde{\sigma}^2_{t+j},\ \sum_{j=0}^{s} h_{t+j|t-1} \right)$

which is distinct from the cumulative sum of losses. This uses the s-period realised variance as the observations.

The temporal average (s + 1)-step-ahead relative loss between a HEAVY and GARCH model will be

  • $\bar{L}_s = \frac{1}{T} \sum_{t} L_{t,s}$

where

  • $L_{t,s} = \log f\!\left( r_{t+s} \mid 0, \bar{\sigma}^2_{t+s|t-1} \right) - \log f\!\left( r_{t+s} \mid 0, h_{t+s|t-1} \right)$

Here ht+s|t−1 is the forecast from the HEAVY model, $\bar{\sigma}^2_{t+s|t-1}$ is the corresponding GARCH forecast and f(x|µ, σ2) denotes a Gaussian density with mean µ and variance σ2, evaluated at x. The framework will allow both the HEAVY and GARCH model to be estimated using QML techniques. The HEAVY model will be favoured if $\bar{L}_s$ is negative.

$\bar{L}_s$ estimates Ls = E(Lt, s), s = 0, 1, …, S: for each s, the unconditional average likelihood ratio between the two models. The HEAVY model will be favoured at s-steps if Ls < 0 and the GARCH model if Ls > 0. We will say that the HEAVY model forecast-dominates the GARCH model if Ls < 0 for all s = 1, 2, …, S. ‘Weakly forecast-dominates’ means that Ls⩽0 for all s = 1, 2, …, S with at least one of the ⩽ relationships being a strict inequality. This approach follows the ideas of Cox (1961b) on non-nested testing using the Vuong (1989) and Rivers and Vuong (2002) implementation.7

The above scheme can be implemented if Lt, s (evaluated at their pseudo-true parameter values) is sufficiently weakly dependent to allow the parameter estimates of the HEAVY and GARCH models to obey a standard Gaussian central limit theorem (e.g. Rivers and Vuong, 2002). Then

  • $\sqrt{T}\left( \bar{L}_s - L_s \right) \xrightarrow{d} N(0, V_s)$

where Vs is the long-run variance of the Lt, s. The scale Vs has to be estimated by an HAC estimator (e.g. Andrews, 1991).
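The comparison can be sketched as follows: compute the QLIK loss differential series, average it, and scale by a Bartlett (Newey-West style) long-run variance estimate. This is an illustrative implementation, not the authors' code; the function names and the default `lags` choice are our own assumptions.

```python
import numpy as np

def qlik(proxy, forecast):
    """QLIK loss of (18), robust to certain noise in the variance proxy."""
    return np.log(forecast) + proxy / forecast

def bartlett_lrv(x, lags):
    """Bartlett-kernel long-run variance of a loss-differential series."""
    x = np.asarray(x, dtype=float) - np.mean(x)
    T = len(x)
    v = np.dot(x, x) / T
    for l in range(1, lags + 1):
        w = 1.0 - l / (lags + 1.0)
        v += 2.0 * w * np.dot(x[l:], x[:-l]) / T
    return v

def heavy_vs_garch(proxy, h_heavy, h_garch, lags=5):
    """Average QLIK loss difference (HEAVY minus GARCH) and its t-statistic;
    a significantly negative statistic favours the HEAVY forecasts."""
    d = qlik(proxy, h_heavy) - qlik(proxy, h_garch)
    T = len(d)
    lbar = float(np.mean(d))
    se = np.sqrt(bartlett_lrv(d, lags) / T)
    return lbar, lbar / se
```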

2.4.4. Horizon-Tuned Estimation and Evaluation

Having multistep-ahead loss functions suggests separately estimating the model at each forecast horizon by minimising expected loss at that horizon. This way of tuning the model to produce multistep-ahead forecasts is called ‘direct forecasting’ and has been studied by, for example, Marcellino et al. (2006) and Ghysels et al. (2009). The former argue direct forecasting may be more robust to model misspecification than iterating one-period-ahead models, although they find iterative methods more effective in forecasting for macroeconomic variables in practice. Direct forecasting dates at least to Cox (1961a). Marcellino et al. (2006) provide an extensive discussion of the literature.

Minimising the QLIK multistep-ahead loss can be thought of as maximising a distinct quasi-likelihood for each value of s:

  • $\hat{\lambda}_s = \arg\max_{\lambda} \left\{ -\frac{1}{2} \sum_{t} \left( \log h_{t+s|t-1}(\lambda) + \frac{r_{t+s}^2}{h_{t+s|t-1}(\lambda)} \right) \right\}$

where the quasi-likelihood is the Gaussian likelihood based on multistep-ahead forecasts. This delivers the sequence of horizon-tuned estimators $\hat{\lambda}_s$ and $\hat{\lambda}_{R,s}$ for s = 0, 1, …, S, whose standard errors can be computed using the usual theory of quasi-likelihoods. In practice, because of the structure of our HEAVY model, by far the most important of these equations is the second one, which allows horizon tuning for the HEAVY-RM forecasts.8 The same exercise can be carried out for a GARCH model.

2.4.5. Bootstrapping

Like GARCH models, HEAVY models have the drawback that they only specify the conditional means of $r_t^2$ and RMt given $\mathcal{F}^{HF}_{t-1}$. It is sometimes helpful to give the entire forecast distributions:

  • $\Pr\!\left( r_{t+s} \le r \mid \mathcal{F}^{HF}_{t-1} \right)$ (19)

or

  • $\Pr\!\left( \sum_{j=0}^{s} r_{t+j} \le r \,\Big|\, \mathcal{F}^{HF}_{t-1} \right)$ (20)

A simple way of carrying this out is via a model-based bootstrap. We use the representation $r_t = \zeta_t h_t^{1/2}$, $RM_t = \eta_t \mu_t$, with $\mathrm{E}(\zeta_t^2 \mid \mathcal{F}^{HF}_{t-1}) = 1$ and $\mathrm{E}(\eta_t \mid \mathcal{F}^{HF}_{t-1}) = 1$, and then assume that $(\zeta_t, \eta_t)'$ is i.i.d. with joint distribution $F_{\zeta, \eta}$. Typically these bivariate variables will be contemporaneously correlated. For equities we would expect a sharp negative correlation reflecting statistical leverage. If we had knowledge of $F_{\zeta, \eta}$ it would be a trivial task to carry out model-based simulation from (19) or (20).

We can estimate the joint distribution function F_{ζ,η} by simply taking the filtered (h_t, µ_t)′ and computing the devolatilised9

  • (ζ̂_t, η̂_t)′ = (r_t/h_t^{1/2}, RM_t/µ_t)′    (21)

and computing the empirical distribution function F̂_{ζ,η}. Then we can sample pairs with replacement from this population,10 and use them to drive a simulated joint path of the pair (rt, RMt)′, (rt+1, RMt+1)′, …, (rt+s, RMt+s)′. Discarding the drawn realised measures gives us paths of daily returns rt, rt+1, …, rt+s. Carrying out this simulation many times approximates the predictive distributions.
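A minimal sketch of this model-based bootstrap, with hypothetical innovation pairs and placeholder parameter values (in practice the pairs would be the filtered quantities from (21) and the parameters would come from the fitted model). Pairs are resampled jointly so their contemporaneous dependence is preserved:

```python
import numpy as np

rng = np.random.default_rng(1)

# Stand-ins for the filtered innovation pairs and fitted parameters.
n = 1500
zeta = rng.standard_normal(n)                     # devolatilised returns
eta = rng.gamma(5.0, 1 / 5.0, size=n)             # devolatilised realised measures
alpha, beta, omega = 0.40, 0.55, 0.05             # HEAVY-r parameters
alpha_R, beta_R, omega_R = 0.40, 0.55, 0.05       # HEAVY-RM parameters
h_T, mu_T, rm_T = 1.0, 1.0, 1.0                   # state at the forecast origin

def simulate_cumulative(s, n_boot=5000):
    """Draw n_boot joint paths s days ahead; return the cumulative returns."""
    cum_r = np.zeros(n_boot)
    for b in range(n_boot):
        h, mu, rm = h_T, mu_T, rm_T
        for j in rng.integers(0, n, size=s):      # resample (zeta, eta) pairs
            h = omega + alpha * rm + beta * h     # HEAVY-r recursion
            mu = omega_R + alpha_R * rm + beta_R * mu
            r = np.sqrt(h) * zeta[j]              # r_t = h_t^{1/2} zeta_t
            rm = mu * eta[j]                      # RM_t = mu_t eta_t
            cum_r[b] += r
    return cum_r

cum = simulate_cumulative(s=10)
q05, q95 = np.quantile(cum, [0.05, 0.95])         # predictive interval as in (20)
```

Discarding the simulated realised measures, as in the text, leaves the bootstrap paths of daily returns whose quantiles approximate (20).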

3. OMI'S REALISED LIBRARY 0.1


3.1. A List of Assets and Data Cleaning

This paper uses the database ‘Oxford-Man Institute's realised library’ version 0.1, which has been produced by Heber et al. (2009).11

Version 0.1 of the library currently starts on 2 January 1996 and finishes on 27 March 2009. Some of the series are available throughout this period, but quite a number start after 1996, as detailed in Table I. In total, the database covers 34 different assets. Some of these series are indexes computed by MSCI. Others are traded assets or indexes computed in real time by other data providers. Table I gives the basic features of the data used to compute the library, indicating the frequency of the base data used in the calculations.

For each asset the library currently records daily returns, daily subsampled realised variances and daily realised kernels. In this paper we use the daily returns and realised kernels in our modelling. If the market is closed or the data are regarded as being of unacceptably low quality for that asset, then the database records it as missing, except for days when all the markets are simultaneously closed, in which case the day is not recorded in the database. As a result, for example, Saturdays are never present in the library. Summary features of the library will be discussed in the next subsection.

Realised variances (1) are computed by first calculating 5-minute returns (using the last-tick method) and then subsampling this statistic every 30 seconds.12 Realised kernels are computed in tick time using every available data point, after cleaning. Data cleaning is discussed in our data appendix at the end of this paper.
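Our reading of this construction can be sketched as follows; the grid details (offset spacing, last-tick interpolation, averaging of the shifted grids) are our assumptions for illustration, not the library's code:

```python
import numpy as np

def last_tick(prices, times, grid):
    """Price prevailing at each grid time (last observation carried forward)."""
    idx = np.searchsorted(times, grid, side="right") - 1
    return prices[np.clip(idx, 0, len(prices) - 1)]

def subsampled_rv(log_prices, times, span=300.0, step=30.0):
    """Average the 5-minute realised variance over 30-second grid offsets."""
    rvs = []
    for offset in np.arange(0.0, span, step):     # ten shifted 5-minute grids
        grid = np.arange(times[0] + offset, times[-1], span)
        p = last_tick(log_prices, times, grid)
        rvs.append(np.sum(np.diff(p) ** 2))
    return float(np.mean(rvs))

# Tiny synthetic example: one 6.5-hour day of irregularly spaced ticks.
rng = np.random.default_rng(2)
times = np.sort(rng.uniform(0.0, 6.5 * 3600, size=5000))
log_p = np.cumsum(rng.normal(0.0, 2e-4, size=5000))
rv = subsampled_rv(log_p, times)
```

On noise-free simulated prices like these the subsampled estimate is close to the day's quadratic variation; with real tick data the 5-minute spacing is what guards against market microstructure noise.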

3.2. Summary Statistics for the Library

Table II gives summary statistics for the realised measures and squared daily returns for each asset. The table is split into three sections, which are raw indexes, MSCI indexes and exchange rates, all quoted against the US dollar.

Table II. Calculations use 100 times differences of the log price (i.e. roughly percent changes). Avol is the square root of the mean of 252 times either squared returns or the realised measure. It is the approximate annualised volatility. SD is the daily standard deviation of percent daily returns or realised measure. The same data are used to compute the ACFs (serial correlations) at 1 lag
Asset | Squared returns r²_t: Avol SD ACF1 | Realised variance: Avol SD ACF1 | Realised kernel: Avol SD ACF1
Dow Jones Industrials | 19.4 4.81 0.125 | 15.2 1.94 0.663 | 15.0 1.95 0.655
Nasdaq 100 | 28.1 8.35 0.180 | 17.8 2.22 0.664 | 18.7 2.52 0.646
S&P 400 Midcap | 21.7 5.68 0.260 | 13.5 1.90 0.800 | 13.7 1.96 0.799
S&P 500 | 20.8 5.46 0.209 | 15.5 2.09 0.699 | 15.9 2.14 0.701
Russell 3000 | 20.3 5.32 0.127 | 14.3 1.86 0.694 | 14.5 1.90 0.697
Russell 1000 | 20.4 5.38 0.125 | 14.7 1.91 0.692 | 14.9 1.94 0.695
Russell 2000 | 23.3 6.02 0.313 | 13.2 1.85 0.715 | 13.4 1.96 0.720
CAC 40 | 23.7 5.95 0.236 | 18.1 2.18 0.662 | 18.3 2.21 0.669
FTSE 100 | 20.7 4.66 0.229 | 15.2 1.62 0.645 | 15.6 1.74 0.620
German DAX | 25.1 6.57 0.163 | 21.1 3.10 0.659 | 21.3 3.22 0.626
Italian MIBTEL | 20.1 5.07 0.218 | 13.1 1.34 0.665 | 13.7 1.52 0.662
Milan MIB 30 | 23.2 5.69 0.214 | 16.5 1.84 0.624 | 17.0 1.99 0.615
Nikkei 250 | 24.9 6.96 0.241 | 16.0 1.37 0.691 | 16.5 1.48 0.668
Spanish IBEX | 23.7 6.57 0.295 | 16.7 1.76 0.639 | 16.5 1.73 0.655
S&P TSE | 20.9 5.54 0.292 | 14.1 1.82 0.785 | 14.3 1.89 0.774
MSCI Australia | 16.4 3.05 0.229 | 8.8 0.53 0.763 | 9.1 0.57 0.749
MSCI Belgium | 23.4 10.5 0.159 | 16.4 1.66 0.718 | 16.1 1.84 0.684
MSCI Brazil | 43.7 24.3 0.155 | 28.5 6.30 0.796 | 29.6 7.21 0.749
MSCI Canada | 19.5 5.05 0.320 | 12.6 1.67 0.819 | 13.1 1.88 0.761
MSCI Switzerland | 20.6 5.25 0.330 | 14.5 1.44 0.727 | 14.5 1.56 0.700
MSCI Germany | 25.7 6.94 0.163 | 21.1 3.10 0.677 | 20.8 2.99 0.692
MSCI Spain | 24.0 6.08 0.225 | 17.5 1.84 0.690 | 17.6 1.92 0.676
MSCI France | 23.9 6.29 0.238 | 18.2 2.23 0.682 | 18.4 2.32 0.669
MSCI UK | 20.0 4.95 0.233 | 15.6 1.84 0.615 | 15.7 1.89 0.649
MSCI Italy | 21.4 5.35 0.247 | 16.0 1.82 0.672 | 16.2 1.93 0.670
MSCI Japan | 23.7 6.40 0.273 | 14.2 1.27 0.746 | 14.4 1.26 0.755
MSCI South Korea | 32.0 9.63 0.131 | 21.6 2.61 0.700 | 21.9 2.80 0.682
MSCI Mexico | 29.6 11.8 0.144 | 16.3 2.59 0.675 | 17.5 2.87 0.678
MSCI Netherlands | 23.9 6.14 0.281 | 17.7 2.09 0.733 | 17.9 2.25 0.716
MSCI World | 17.7 4.22 0.250 | 13.1 1.44 0.766 | 13.6 1.68 0.691
British pound | 9.2 0.75 0.215 | 9.8 0.51 0.876 | 9.4 0.51 0.879
Euro | 10.4 0.79 0.103 | 11.1 0.45 0.668 | 10.5 0.45 0.658
Swiss franc | 11.0 0.91 0.133 | 11.6 0.39 0.690 | 10.8 0.38 0.650
Japanese yen | 10.9 1.32 0.134 | 11.6 0.64 0.698 | 11.2 0.63 0.696

The Avol number takes either squared returns or the realised measure, multiplies it by 252, averages over the sample period and then reports the square root of the result. This puts Avol on the scale of an annualised volatility, which is familiar in financial economics. It shows the raw common indexes have annualised volatility for returns of usually just over 20%, with the corresponding results for the realised variance measures typically being around 16% and the realised kernels around the same level. Of course, the realised measures miss the overnight return, which accounts for their lower level. The MSCI indexes have more variation in their Avol levels, sometimes going into the 30s and in one case the 40s. The overnight effects are large again. In the exchange rate case the Avols are lower for squared returns, and the realised measures have roughly the same average level, presumably because there is no overnight effect. The Avol for realised kernels is typically a little higher than for the realised variance, but the difference is very small.
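The Avol transformation itself is a one-liner; the sketch below simply restates the definition in the caption of Table II:

```python
import numpy as np

def avol(x):
    """Square root of the mean of 252 times daily squared percent returns
    (or a daily realised measure), as in Table II."""
    return float(np.sqrt(np.mean(252.0 * np.asarray(x, dtype=float))))

# A constant daily variance of 1.59 percent^2 annualises to roughly 20%.
a = avol(np.full(250, 1.59))
```

This makes clear why, say, the Dow Jones realised-kernel Avol of 15.0 corresponds to an average daily realised variance of about 15.0²/252 ≈ 0.89 percent².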

The SD figures are standard deviations of the daily squared percentage returns or realised measures, not scaled to annualised quantities. They show much higher standard deviations for squared returns than for their realised measure cousins. The ACF figures are the serial correlation coefficients at one lag. These show the modest degree of serial correlation in squared returns and much higher values for the realised variances and realised kernels. These are the expected results.

4. EMPIRICAL ANALYSIS WITH A LARGE CROSS-SECTION


4.1. Estimated Models

In this section we will take each univariate series of returns and realised measures and fit a HEAVY model together with the targeting GARCH:

  • σ²_t = (1 − α_G − β_G)µ_G + α_G r²_{t−1} + β_G σ²_{t−1}

and the non-targeting GARCHX models. The HEAVY models are set up in their targeting parameterisation:

  • h_t = (1 − β)µ − αµ_R + α RM_{t−1} + β h_{t−1}
  • µ_t = (1 − α_R − β_R)µ_R + α_R RM_{t−1} + β_R µ_{t−1}

The GARCH and HEAVY models are estimated using a two-step approach: unconditional empirical moments are used for µG, µR and µ, and the quasi-likelihoods are then maximised over (αG, βG), (αR, βR) and (α, β). For the GARCHX model, by contrast, the quasi-likelihood is optimised over all the parameters of the model.
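A hedged sketch of this two-step approach for the HEAVY-r equation, on simulated stand-in data (the moment and targeting formulas follow the text; the simulation design and helper names are our illustration):

```python
import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(4)

# Simulate a stand-in HEAVY system (true values: alpha = 0.35, beta = 0.60).
T, alpha0, beta0 = 2000, 0.35, 0.60
h, mu_t = 1.0, 1.0
r = np.empty(T)
rm = np.empty(T)
for t in range(T):
    r[t] = np.sqrt(h) * rng.standard_normal()     # r_t = h_t^{1/2} z_t
    rm[t] = mu_t * rng.gamma(5.0, 1 / 5.0)
    h = 0.05 + alpha0 * rm[t] + beta0 * h         # HEAVY-r driven by RM
    mu_t = 0.05 + alpha0 * rm[t] + beta0 * mu_t

def qlik_r(params, r, rm, mu, mu_R):
    """Mean Gaussian QLIK loss with a targeting intercept."""
    alpha, beta = params
    if alpha < 0 or beta < 0 or beta >= 1:
        return 1e10
    omega = (1 - beta) * mu - alpha * mu_R        # step 1 pins down the intercept
    if omega + alpha * rm.min() <= 0:             # keep h_t strictly positive
        return 1e10
    h = np.empty_like(r)
    h[0] = mu
    for t in range(1, len(r)):
        h[t] = omega + alpha * rm[t - 1] + beta * h[t - 1]
    return float(np.mean(np.log(h) + r ** 2 / h))

mu, mu_R = np.mean(r ** 2), np.mean(rm)           # step 1: empirical moments
res = minimize(qlik_r, x0=[0.3, 0.6], args=(r, rm, mu, mu_R),
               method="Nelder-Mead")
alpha_hat, beta_hat = res.x
```

The second step therefore searches over (α, β) alone, with the intercept concentrated out by the moment conditions.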

For multistep-ahead forecasts there are some arguments for imposing a unit root on the HEAVY-RM model, in which case we model

  • µ_t = α_R RM_{t−1} + (1 − α_R)µ_{t−1}    (22)

which means it has no targeting features at all. It would seem illogical to want to impose targeting on HEAVY-r at the same time as using an integrated model for realised measures.
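To see what the unit root does to multistep forecasts, the sketch below iterates both HEAVY recursions forward; parameter values are placeholders of roughly the magnitudes seen in Table III. With a targeting HEAVY-RM the variance forecasts mean revert to their unconditional level, while setting the HEAVY-RM intercept to zero with α_R + β_R = 1 makes the realised-measure forecasts flat:

```python
import numpy as np

def heavy_iterate(h1, mu1, omega, alpha, beta, omega_R, alpha_R, beta_R, s):
    """Return E[h_{t+j} | F_t] for j = 1..s by iterating both recursions."""
    h, mu = h1, mu1
    out = np.empty(s)
    for j in range(s):
        out[j] = h
        h = omega + alpha * mu + beta * h          # uses E[RM_{t+j}] = mu_{t+j|t}
        mu = omega_R + (alpha_R + beta_R) * mu     # HEAVY-RM forecast recursion
    return out

# Targeting HEAVY-RM: forecasts mean revert to the unconditional level (1.0 here).
f = heavy_iterate(2.0, 1.5, 0.05, 0.35, 0.60, 0.05, 0.40, 0.55, s=300)

# Integrated HEAVY-RM (no intercept, alpha_R + beta_R = 1): RM forecasts stay flat.
g = heavy_iterate(2.0, 1.5, 0.05, 0.35, 0.60, 0.0, 0.40, 0.60, s=300)
```

In the integrated case the variance forecasts still settle down, but to a level pinned by the flat realised-measure forecast rather than by an unconditional mean.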

The results are presented in some detail in Table III for the dynamic parameters. In the HEAVY-r model the momentum parameter β is typically in the range from 0.6 to 0.75 but there are exceptions, which are typically exchange rates where there is very considerable memory. The HEAVY-RM models show a very large degree of persistence in the series, with αR being typically in the region of 0.35–0.45 and αR + βR being close to one. For currencies, using realised measures improves the fit of the model but the improvement is modest, as can be seen from Table IV.

Table III. Fit of GARCH and HEAVY models for various indexes and exchange rates. The cross-sectional median takes the median of the parameter estimates for the indexes. GARCH and HEAVY-RM models are estimated using the targeting parameterisation. Integrated models are IGARCH and Int-HEAVY-RM
Asset | HEAVY-r: α β | GARCHX: αX βX γX | GARCH: αG βG | HEAVY-RM: αR βR | Integrated: αG αR
Dow Jones Industrials | 0.407 0.737 | 0.407 0.737 0.000 | 0.082 0.912 | 0.411 0.567 | 0.062 0.336
Nasdaq 100 | 0.730 0.658 | 0.439 0.744 0.051 | 0.081 0.916 | 0.428 0.567 | 0.063 0.349
S&P 400 Midcap | 0.848 0.641 | 0.270 0.794 0.083 | 0.100 0.886 | 0.392 0.603 | 0.073 0.333
S&P 500 | 0.378 0.773 | 0.378 0.773 0.000 | 0.076 0.918 | 0.417 0.564 | 0.054 0.340
Russell 3000 | 0.448 0.747 | 0.448 0.747 0.000 | 0.081 0.911 | 0.403 0.574 | 0.059 0.313
Russell 1000 | 0.397 0.768 | 0.397 0.768 0.000 | 0.078 0.916 | 0.402 0.577 | 0.057 0.315
Russell 2000 | 0.949 0.678 | 0.244 0.812 0.102 | 0.106 0.885 | 0.387 0.622 | 0.077 0.322
CAC 40 | 0.526 0.674 | 0.526 0.674 0.000 | 0.081 0.917 | 0.417 0.573 | 0.067 0.350
FTSE 100 | 0.613 0.656 | 0.613 0.656 0.000 | 0.105 0.892 | 0.441 0.556 | 0.085 0.369
German DAX | 0.447 0.673 | 0.447 0.673 0.000 | 0.093 0.903 | 0.457 0.536 | 0.075 0.376
Italian MIBTEL | 0.806 0.630 | 0.806 0.630 0.000 | 0.107 0.889 | 0.512 0.486 | 0.080 0.436
Milan MIB 30 | 0.496 0.748 | 0.342 0.779 0.047 | 0.102 0.895 | 0.484 0.518 | 0.075 0.417
Nikkei 250 | 0.508 0.772 | 0.508 0.772 0.000 | 0.079 0.905 | 0.346 0.641 | 0.065 0.295
Spanish IBEX | 0.640 0.669 | 0.481 0.713 0.035 | 0.113 0.885 | 0.393 0.603 | 0.084 0.343
S&P TSE | 0.643 0.692 | 0.637 0.693 0.002 | 0.067 0.930 | 0.362 0.635 | 0.054 0.324
Index's median | 0.526 0.678 | 0.447 0.744 0.000 | 0.082 0.905 | 0.411 0.573 | 0.067 0.340
MSCI Australia | 0.214 0.645 | 0.976 0.668 0.043 | 0.098 0.894 | 0.324 0.670 | 0.069 0.292
MSCI Belgium | 0.769 0.568 | 0.374 0.692 0.093 | 0.143 0.854 | 0.399 0.608 | 0.105 0.359
MSCI Brazil | 0.662 0.652 | 0.661 0.653 0.001 | 0.096 0.876 | 0.433 0.536 | 0.071 0.375
MSCI Canada | 0.515 0.765 | 0.485 0.769 0.009 | 0.074 0.914 | 0.364 0.630 | 0.060 0.329
MSCI Switzerland | 0.699 0.638 | 0.699 0.638 0.000 | 0.131 0.860 | 0.474 0.508 | 0.093 0.425
MSCI Germany | 0.568 0.592 | 0.568 0.592 0.000 | 0.107 0.885 | 0.461 0.529 | 0.083 0.388
MSCI Spain | 0.589 0.659 | 0.589 0.659 0.000 | 0.090 0.907 | 0.417 0.579 | 0.067 0.365
MSCI France | 0.596 0.628 | 0.596 0.628 0.000 | 0.090 0.908 | 0.453 0.543 | 0.074 0.386
MSCI UK | 0.582 0.616 | 0.582 0.616 0.000 | 0.110 0.886 | 0.456 0.543 | 0.086 0.393
MSCI Italy | 0.583 0.659 | 0.583 0.659 0.000 | 0.100 0.896 | 0.537 0.462 | 0.075 0.467
MSCI Japan | 0.741 0.720 | 0.741 0.720 0.000 | 0.088 0.902 | 0.459 0.533 | 0.075 0.387
MSCI South Korea | 0.765 0.661 | 0.765 0.661 0.000 | 0.071 0.928 | 0.432 0.564 | 0.059 0.392
MSCI Mexico | 0.872 0.711 | 0.723 0.725 0.032 | 0.095 0.885 | 0.364 0.624 | 0.068 0.328
MSCI Netherlands | 0.538 0.678 | 0.538 0.678 0.000 | 0.105 0.889 | 0.453 0.541 | 0.084 0.396
MSCI World | 0.339 0.798 | 0.339 0.798 0.000 | 0.084 0.910 | 0.377 0.610 | 0.068 0.340
MSCI's median | 0.596 0.659 | 0.589 0.661 0.000 | 0.096 0.894 | 0.433 0.543 | 0.074 0.386
British pound | 0.162 0.810 | 0.162 0.810 0.000 | 0.042 0.950 | 0.283 0.699 | 0.035 0.264
Euro | 0.055 0.936 | 0.034 0.947 0.013 | 0.030 0.969 | 0.247 0.746 | 0.028 0.223
Swiss franc | 0.046 0.948 | 0.045 0.947 0.002 | 0.027 0.971 | 0.239 0.748 | 0.024 0.220
Japanese yen | 0.173 0.772 | 0.173 0.772 0.000 | 0.048 0.934 | 0.398 0.552 | 0.035 0.341
Currency's median | 0.109 0.873 | 0.104 0.879 0.001 | 0.036 0.959 | 0.265 0.722 | 0.031 0.244
Table IV. Twice the likelihood change by imposing restrictions on the model. Left-hand side shows twice the likelihood change compared to the GARCHX model. The right-hand side compares the unconstrained GARCH and HEAVY-RM models with those which impose a unit root
Asset | Compare to extended HEAVY-r: HEAVY-r GARCH | Impose unit root: GARCH HEAVY-RM | No momentum: β = 0
Dow Jones Industrials | 0.0 −199.5 | −48.4 −19.5 | −56.6
Nasdaq 100 | −15.9 −108.5 | −31.1 −14.4 | −72.9
S&P 400 Midcap | −64.6 −61.8 | −61.4 −11.0 | −88.8
S&P 500 | 0.0 −211.1 | −50.6 −17.9 | −67.2
Russell 3000 | 0.0 −187.3 | −49.8 −21.1 | −61.3
Russell 1000 | 0.0 −186.3 | −45.3 −20.0 | −61.9
Russell 2000 | −163.2 −64.9 | −57.4 −13.3 | −131.1
CAC 40 | 0.0 −149.1 | −30.8 −14.5 | −67.3
FTSE 100 | 0.0 −125.5 | −32.4 −12.3 | −55.2
German DAX | 0.0 −153.4 | −47.0 −16.0 | −63.7
Italian MIBTEL | 0.0 −141.2 | −40.5 −9.9 | −38.1
Milan MIB 30 | −16.5 −100.7 | −48.3 −13.0 | −75.6
Nikkei 250 | 0.0 −116.5 | −64.5 −9.9 | −84.6
Spanish IBEX | −9.3 −113.9 | −59.0 −12.1 | −78.4
S&P TSE | −0.0 −120.8 | −17.3 −5.6 | −72.3
Index's median | 0.0 −125.5 | −48.3 −13.3 | −67.3
MSCI Australia | −6.6 −96.6 | −31.2 −3.9 | −55.8
MSCI Belgium | −22.7 −66.2 | −60.2 −4.1 | −56.9
MSCI Brazil | 0.0 −60.2 | −35.5 −7.1 | −23.6
MSCI Canada | −0.4 −75.0 | −22.9 −4.4 | −56.7
MSCI Switzerland | 0.0 −153.4 | −65.8 −9.1 | −32.7
MSCI Germany | 0.0 −136.9 | −45.0 −10.7 | −44.5
MSCI Spain | 0.0 −106.7 | −31.5 −7.5 | −44.5
MSCI France | 0.0 −158.3 | −27.7 −9.4 | −47.1
MSCI UK | 0.0 −134.3 | −37.1 −9.3 | −44.5
MSCI Italy | 0.0 −154.7 | −38.3 −8.7 | −35.4
MSCI Japan | 0.0 −111.8 | −33.7 −6.2 | −28.0
MSCI South Korea | 0.0 −118.6 | −15.1 −4.1 | −43.5
MSCI Mexico | −3.4 −61.2 | −36.5 −3.5 | −43.1
MSCI Netherlands | 0.0 −117.8 | −40.8 −7.6 | −46.8
MSCI World | 0.0 −92.9 | −25.6 −6.3 | −104.0
MSCI's median | 0.0 −111.8 | −35.5 −7.1 | −44.5
British pound | 0.0 −50.4 | −16.0 −1.8 | −28.3
Euro | −2.7 −18.5 | −6.0 −1.6 | −44.6
Swiss franc | −0.1 −33.0 | −5.9 −1.7 | −40.5
Japanese yen | 0.0 −67.4 | −38.6 −8.4 | −26.1
Currency's median | −0.0 −41.7 | −11.0 −1.8 | −34.4

When we allow for realised measures in the GARCH model, that is, we specify the GARCHX model, the γX parameter is typically estimated to be on its boundary at exactly zero. There are eight exceptions to this, but the use of robust standard errors (not reported here) suggests only two are statistically significant. These two are the S&P 400 Midcap and Russell 2000. In those cases the realised kernel may not have dealt correctly with the dependence in their high-frequency data induced by the staleness of the prices of some of the components of the indexes.

Also given in the table is the median of the estimators for the three blocks of assets, which provides a guide to typical behaviour. Finally, the table also records the estimated value of αR for the integrated HEAVY model. This does not change very much from the estimated HEAVY model, though typically there are small falls in the estimates.

Table IV shows the change in the log-likelihood function by moving to the HEAVY-r and GARCH models from the nesting GARCHX model. In the GARCH case the changes are always very large; in the HEAVY-r case the changes are usually zero. However, there are a couple of cases where the reduction in likelihood is quite large. The table also shows the impact on the likelihood by imposing unit roots on the GARCH and HEAVY-RM models. The effect on the HEAVY-RM model is more modest than in the GARCH case.

Table V shows the HEAVY model's average in-sample iterated multistep-ahead QLIK loss compared to the GARCH model, using the methodology discussed above (‘Iterative Multistep-Ahead Forecasts’). Here the parameters are estimated using the quasi-likelihood, which means they are tuned to perform best at one-step-ahead forecasting. The forecast horizon varies over 1, 2, 3, 5, 10 and 22 lags. Two models are fitted. The left-hand side shows the result for the standard HEAVY model, which is estimated using a targeting parameterisation. The right-hand side shows the corresponding result for the ‘integrated HEAVY’ model, which is discussed in (22). Recall that negative t-statistics indicate a statistically significant preference for HEAVY models. The final column of Table IV examines the log-likelihood loss from excluding the smoothing parameter from the HEAVY-RM model (β = 0). In all cases the decrease in log-likelihood is substantial, indicating that averaging over the most recent 4 or 5 days is highly desirable.

Table V. In-sample likelihood ratio tests for losses generated by HEAVY and GARCH models. Negative values favour HEAVY models. Both models are estimated using the quasi-likelihood, i.e. tuned to one-step-ahead predictions
Asset | t-statistics for non-nested LR tests for iterative forecasts
(Left: HEAVY model, horizon s + 1 = 1 2 3 5 10 22. Right: Int. HEAVY model, horizon s + 1 = 1 2 3 5 10 22.)
Dow Jones Industrials | −5.72 −3.79 −3.07 −2.98 −2.16 0.78 | −5.65 −3.71 −3.02 −2.75 −2.40 0.03
Nasdaq 100 | −2.49 −0.46 −0.34 −0.72 1.03 −0.42 | −2.47 −0.46 −0.33 −0.57 1.25 −0.02
S&P 400 Midcap | 0.07 1.19 1.14 0.16 0.25 −0.41 | 0.16 1.21 1.15 0.38 0.72 0.64
S&P 500 | −6.12 −4.50 −3.98 −4.14 −1.92 0.90 | −6.01 −4.43 −3.91 −3.89 −1.51 0.81
Russell 3000 | −5.69 −3.97 −3.25 −4.01 −1.82 −0.12 | −5.52 −3.82 −3.20 −3.87 −1.75 −0.29
Russell 1000 | −5.40 −3.88 −3.25 −3.88 −1.65 0.33 | −5.25 −3.74 −3.20 −3.74 −1.67 0.06
Russell 2000 | 1.70 2.32 2.24 1.28 1.45 0.41 | 1.73 2.24 2.12 1.35 1.54 0.89
CAC 40 | −4.43 −3.04 −2.32 −0.78 −0.17 1.56 | −4.38 −2.96 −2.15 −0.70 −0.36 0.88
FTSE 100 | −5.18 −3.34 −2.61 −1.71 −0.17 −0.10 | −5.08 −3.19 −2.39 −1.62 −0.27 0.11
German DAX | −5.15 −3.40 −2.79 −1.10 −0.92 −0.47 | −5.23 −3.40 −2.65 −0.68 −0.61 0.34
Italian MIBTEL | −4.13 −3.20 −3.22 −1.73 −0.86 −0.86 | −4.02 −2.91 −2.66 −1.24 −0.14 −0.89
Milan MIB 30 | −1.89 −0.98 −0.91 −0.17 −0.05 −0.08 | −1.88 −0.89 −0.71 0.09 0.51 0.05
Nikkei 250 | −3.87 −2.55 −2.06 −0.56 0.32 0.53 | −3.63 −2.38 −1.75 −0.19 0.77 1.76
Spanish IBEX | −2.81 −2.51 −1.37 −0.63 −1.13 −0.61 | −2.81 −2.46 −1.18 −0.53 −0.98 0.07
S&P TSE | −5.17 −4.44 −3.57 −2.23 −0.89 −0.23 | −5.16 −4.40 −3.49 −2.04 −0.59 0.22
MSCI Australia | −3.14 −1.94 −2.57 −1.87 −2.35 −2.89 | −3.14 −1.93 −2.54 −1.80 −1.70 −2.05
MSCI Belgium | −1.21 −1.21 −1.08 −1.75 −2.05 −2.14 | −0.85 −1.04 −0.94 −1.59 −1.62 −0.58
MSCI Brazil | −3.54 −2.19 −1.40 −1.22 −1.35 −0.22 | −3.31 −2.01 −1.01 −0.84 −0.49 0.45
MSCI Canada | −3.90 −3.15 −3.11 −2.47 −1.73 −1.03 | −3.91 −3.14 −3.07 −2.34 −1.42 −0.43
MSCI Switzerland | −4.33 −3.01 −2.23 −1.94 −0.37 −1.50 | −4.15 −2.87 −2.12 −1.88 0.13 0.50
MSCI Germany | −5.31 −4.50 −3.90 −2.45 −1.15 −1.45 | −5.33 −4.43 −3.54 −1.64 −0.56 −0.07
MSCI Spain | −3.71 −2.59 −2.05 −1.22 −0.39 −0.55 | −3.44 −2.36 −1.74 −1.06 −0.14 −1.05
MSCI France | −5.67 −4.56 −3.33 −1.69 −0.64 −0.06 | −5.52 −4.31 −2.96 −1.33 −0.46 −0.08
MSCI UK | −5.54 −3.98 −3.20 −2.30 −0.42 −0.48 | −5.17 −3.59 −2.92 −2.19 −0.47 −0.24
MSCI Italy | −5.38 −3.78 −3.32 −2.71 −1.02 −0.36 | −5.29 −3.48 −2.96 −2.23 −0.63 −0.79
MSCI Japan | −5.30 −3.06 −2.28 −0.61 −0.09 0.62 | −5.08 −2.90 −2.00 −0.25 0.31 1.44
MSCI South Korea | −4.79 −2.61 −2.29 −2.32 −0.49 2.74 | −4.73 −2.53 −2.23 −2.25 −0.34 2.18
MSCI Mexico | −2.47 −1.79 −1.80 −1.21 −1.96 −1.26 | −2.43 −1.73 −1.68 −1.03 −1.72 −1.04
MSCI Netherlands | −4.81 −3.34 −2.33 −2.14 −1.39 −1.46 | −4.40 −3.06 −2.06 −1.79 −0.93 −0.57
MSCI World | −5.57 −4.37 −3.39 −2.02 −1.26 −0.37 | −5.04 −3.97 −3.00 −1.41 −1.16 −0.10
British pound | −3.33 −2.99 −2.06 −1.81 −1.44 −2.25 | −3.36 −2.99 −2.02 −1.72 −1.16 −1.45
Euro | −1.14 −0.75 −0.63 −0.36 −0.22 −0.16 | −1.11 −0.71 −0.59 −0.29 −0.16 0.10
Swiss franc | −2.55 −2.82 −2.81 −2.08 −2.18 −2.32 | −2.54 −2.82 −2.79 −2.00 −2.05 −1.86
Japanese yen | −2.97 −2.35 −1.30 −0.25 −0.79 0.65 | −2.88 −2.20 −1.16 −0.32 −0.64 0.12

The results are striking. They show that, in sample and pointwise, the standard HEAVY model forecast dominates the GARCH model, but that the out-performance weakens as the forecast horizon increases. The integrated HEAVY model performs slightly more poorly than the unconstrained HEAVY model.

This picture is remarkably stable across assets, with two counter-examples: the mid-cap series Russell 2000 and the S&P 400 Midcap. These have lower quasi-likelihoods, and this under-performance continues at multistep-ahead horizons.

4.2. Direct Forecasting

The above estimation strategy fixes the parameters at the QMLE values and uses these to iterate through the multistep-ahead forecast formula to produce multistep-ahead forecasts and corresponding estimated losses. We call this indirect estimation. We now move on to a second approach, which allows different parameters to be used at different forecast horizons, maximising the multistep-ahead forecast quasi-likelihood for the HEAVY-RM model. Recall this is called the direct parameter estimator.

We first focus on the estimated parameters that come out of this approach, highlighting results from the Dow Jones Industrials example. The left of Figure 1 shows a plot of the estimated memory in the HEAVY-RM and GARCH models:

  • (α_R + β_R)^{s+1} and (α_G + β_G)^{s+1}    (23)

plotted against s when we use the quasi-likelihood, which is tuned to perform well at one step. We see that, although the estimated values of these parameters are not very different, at long lags the difference becomes magnified. By the time we are 1 month out, the HEAVY-RM model wants to place around half the weight on recent past data and half on the unconditional mean. In the GARCH model the figures are very different; the model wants around 90% of the weight to come from recent data and only 10% from the unconditional mean.
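These weights follow directly from (23) and the Dow Jones Industrials estimates in Table III (α_R + β_R = 0.411 + 0.567 = 0.978 and α_G + β_G = 0.082 + 0.912 = 0.994):

```python
heavy_w = (0.411 + 0.567) ** 22   # HEAVY-RM weight on recent data, 1 month out
garch_w = (0.082 + 0.912) ** 22   # GARCH weight on recent data, 1 month out
```

This gives roughly 0.61 for the HEAVY-RM model and 0.88 for GARCH, the "around half" and "around 90%" figures quoted in the text.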


Figure 1. Direct and indirect methods for the Dow Jones Industrial case. Estimates of (αR + βR)^{s+1} and (αG + βG)^{s+1} drawn against forecast horizon s + 1. The figure shows the impact of strong mean reversion on the HEAVY-RM model when it is indirectly estimated and the weaker mean reversion in the direct case. This figure is available in color online at www.interscience.wiley.com/journal/jae


Figure 1 also shows the profile of (23) for the directly estimated parameters, tuning each estimator to the appropriate forecast horizon. When we do this the persistence of the HEAVY-RM model jumps up beyond the level of the GARCH model. This is caused by a reduction in αR from around 0.4 for small numbers of periods ahead to around 0.2 for longer periods ahead: the fall in αR is more than offset by a sharper rise in βR, so the estimated value of αR + βR increases for large s. By comparison, the increase in the level of the curve for the GARCH model is similar.

When we compare the forecast performance of the directly estimated GARCH and HEAVY models using the QLIK loss functions we see in Table VI that the HEAVY models are systematically much better. This improvement is now sustained at quite long horizons and holds for standard HEAVY models and integrated versions.

Table VI. In-sample t-statistic-based LR tests comparing losses generated by the HEAVY and GARCH models. Negative values favour the HEAVY model. The left columns of each panel compare HEAVY and GARCH models using horizon tuned parameters and the right columns compare Integrated HEAVY against a standard GARCH model using horizon-tuned parameters
Asset | Pointwise: Direct HEAVY vs. Direct GARCH (1 10 22) | Pointwise: Direct Int. HEAVY vs. Direct GARCH (1 10 22) | Cumulative: Direct HEAVY vs. Direct GARCH (5 10 22) | Cumulative: Direct Int. HEAVY vs. Direct GARCH (5 10 22)
Dow Jones Industrials | −5.72 −3.34 −0.95 | −5.65 −3.50 −0.30 | −4.40 −4.32 −3.60 | −4.48 −4.52 −3.71
Nasdaq 100 | −2.49 −0.51 −0.54 | −2.47 −0.23 0.12 | −0.88 −0.17 −0.45 | −0.79 0.03 0.02
S&P 400 Midcap | 0.07 0.55 0.24 | 0.16 0.81 1.00 | 0.78 0.80 0.54 | 0.89 1.06 1.15
S&P 500 | −6.12 −4.52 −0.24 | −6.01 −4.63 0.25 | −5.43 −4.95 −2.84 | −5.46 −5.12 −2.88
Russell 3000 | −5.69 −4.24 −1.15 | −5.52 −4.17 −0.27 | −4.86 −4.61 −3.63 | −4.78 −4.55 −3.30
Russell 1000 | −5.40 −4.11 −0.69 | −5.25 −4.17 −0.00 | −4.75 −4.44 −3.09 | −4.72 −4.52 −2.94
Russell 2000 | 1.70 1.56 0.81 | 1.73 1.66 1.17 | 2.01 1.97 1.59 | 1.99 1.99 1.76
CAC 40 | −4.43 −0.98 −0.34 | −4.38 −0.91 0.59 | −2.98 −1.87 −1.40 | −2.88 −1.79 −0.69
FTSE 100 | −5.18 −1.97 −1.32 | −5.08 −1.81 0.28 | −3.46 −2.44 −2.35 | −3.25 −2.08 −1.09
German DAX | −5.15 −1.18 −1.48 | −5.23 −0.72 0.64 | −3.70 −2.84 −2.84 | −3.47 −2.04 −0.92
Italian MIBTEL | −4.13 −1.61 −1.52 | −4.02 −1.08 −0.35 | −3.17 −2.23 −2.32 | −2.85 −1.84 −1.37
Milan MIB 30 | −1.89 −0.37 −1.51 | −1.88 −0.25 0.20 | −1.07 −0.97 −1.73 | −0.96 −1.08 −0.96
Nikkei 250 | −3.87 −0.11 0.51 | −3.63 0.18 1.03 | −2.07 −1.05 −0.05 | −1.84 −0.70 0.50
Spanish IBEX | −2.81 −0.90 −0.73 | −2.81 −1.02 −0.11 | −2.02 −2.27 −1.82 | −1.96 −1.94 −0.85
S&P TSE | −5.17 −2.37 −1.83 | −5.16 −2.24 −1.10 | −4.14 −2.95 −2.50 | −4.04 −2.92 −2.55
MSCI Australia | −3.14 −2.17 −2.84 | −3.14 −2.09 −1.69 | −2.51 −2.62 −3.42 | −2.47 −2.45 −2.73
MSCI Belgium | −1.21 −1.89 −1.67 | −0.85 −1.68 0.07 | −1.60 −1.93 −2.23 | −1.37 −1.68 −1.43
MSCI Brazil | −3.54 −1.56 0.03 | −3.31 −1.04 0.74 | −2.91 −2.03 −0.96 | −2.47 −1.37 −0.24
MSCI Canada | −3.90 −2.41 −1.69 | −3.91 −2.30 −1.02 | −3.47 −2.71 −2.40 | −3.41 −2.55 −2.20
MSCI Switzerland | −4.33 −1.95 −1.44 | −4.15 −1.74 0.67 | −3.10 −2.19 −2.14 | −2.98 −1.71 −0.49
MSCI Germany | −5.31 −2.27 −1.50 | −5.33 −1.51 0.48 | −4.83 −3.03 −2.80 | −4.37 −2.15 −1.13
MSCI Spain | −3.71 −1.30 −1.50 | −3.44 −1.12 −0.73 | −2.62 −1.82 −1.84 | −2.38 −1.69 −1.37
MSCI France | −5.67 −1.61 −1.23 | −5.52 −1.22 0.21 | −4.25 −2.58 −2.20 | −3.93 −2.05 −1.04
MSCI UK | −5.54 −2.43 −1.65 | −5.17 −2.27 0.09 | −3.84 −2.96 −2.54 | −3.57 −2.59 −1.35
MSCI Italy | −5.38 −2.86 −2.19 | −5.29 −2.43 −0.58 | −4.10 −3.47 −3.72 | −3.85 −3.52 −2.60
MSCI Japan | −5.30 −0.72 0.27 | −5.08 −0.38 0.75 | −2.88 −2.21 −1.17 | −2.55 −1.65 −0.26
MSCI South Korea | −4.79 −2.30 1.21 | −4.73 −2.13 1.06 | −3.46 −2.71 −0.33 | −3.39 −2.51 0.05
MSCI Mexico | −2.47 −1.47 −1.56 | −2.43 −1.45 −1.27 | −1.95 −2.12 −2.19 | −1.90 −2.07 −2.27
MSCI Netherlands | −4.81 −2.14 −2.99 | −4.40 −1.81 −0.83 | −3.29 −2.59 −2.90 | −2.99 −2.19 −1.81
MSCI World | −5.57 −2.25 −0.86 | −5.04 −1.93 0.04 | −4.05 −3.16 −2.69 | −3.60 −2.86 −2.10
British pound | −3.33 −1.74 −2.20 | −3.36 −1.69 −1.44 | −2.60 −2.18 −2.65 | −2.59 −2.12 −2.34
Euro | −1.14 0.05 −0.14 | −1.11 0.09 0.11 | −0.60 −0.24 −0.40 | −0.55 −0.19 −0.26
Swiss franc | −2.55 −1.65 −2.50 | −2.54 −1.61 −2.30 | −2.66 −2.58 −3.17 | −2.64 −2.56 −3.08
Japanese yen | −2.97 −1.78 −1.25 | −2.88 −1.71 0.05 | −2.22 −2.24 −2.36 | −2.16 −1.98 −1.42

An important question is how well we forecast the variance of the sum of s one-period returns. Again the forecast out-performance of HEAVY models appears for nearly all assets and forecast horizons. The results are given in Table VI.

4.3. Out-of-Sample Performance

An out-of-sample exercise was conducted to assess the performance of HEAVY models in a more realistic scenario. All models were estimated using a moving window with a width of 4 years (1008 observations) and parameters were updated daily. Forecasts were then produced for one through 22 steps ahead. Table VII shows the results of this exercise based on two comparisons. The first comparison is based on direct estimation of both the HEAVY-RM model and its GARCH competitor. In both cases parameters were optimised by fitting the realised measure (HEAVY-RM) or squared return (GARCH) models at the forecasting horizon. All HEAVY models used the same HEAVY-r model, which was optimised for the one-step horizon. The second compares the performance of the Integrated HEAVY-RM specification with a standard GARCH, where both sets of parameters were optimised for one-step prediction. The standard HEAVY model based on one-step tuning is not included since the memory parameter chosen was often implausibly small. Neither the directly estimated HEAVY model nor the Integrated HEAVY suffers from this issue.

Table VII. Out-of-sample t-statistic-based LR tests comparing losses generated by the HEAVY and GARCH models. Negative values favour the HEAVY model. The left columns of each panel compare HEAVY and GARCH models using horizon-tuned parameters and the right columns compare Integrated HEAVY against a standard GARCH model using one-step-ahead tuned parameters
Asset | Pointwise: Direct HEAVY vs. Direct GARCH (1 10 22) | Pointwise: Int. HEAVY vs. GARCH (1 10 22) | Cumulative: Direct HEAVY vs. Direct GARCH (5 10 22) | Cumulative: Int. HEAVY vs. GARCH (5 10 22)
Dow Jones Industrials | −5.94 −2.74 0.39 | −5.81 −3.04 −0.60 | −5.19 −4.87 −2.83 | −4.83 −4.75 −3.03
Nasdaq 100 | −5.43 −1.00 −2.67 | −5.28 −3.55 −2.55 | −4.51 −3.50 −2.94 | −4.33 −4.64 −3.45
S&P 400 Midcap | −2.87 −0.81 −2.50 | −2.98 −0.07 −1.25 | −2.01 −1.89 −1.29 | −1.90 −2.47 −2.50
S&P 500 | −6.55 −1.96 0.24 | −6.57 −3.14 −0.34 | −5.40 −4.55 −2.03 | −5.12 −4.79 −2.67
Russell 3000 | −6.00 −1.87 −0.88 | −5.89 −3.48 −1.17 | −5.29 −4.37 −2.64 | −5.22 −5.23 −3.49
Russell 1000 | −6.01 −1.82 −0.66 | −5.90 −3.41 −0.91 | −5.37 −4.36 −2.53 | −5.24 −5.22 −3.24
Russell 2000 | −0.97 0.20 −0.80 | −1.07 0.24 −0.73 | 0.17 0.48 −0.42 | −0.43 −0.11 −0.30
CAC 40 | −4.82 −0.20 −2.08 | −4.76 −1.06 −1.46 | −4.31 −1.76 −1.72 | −4.02 −2.72 −2.07
FTSE 100 | −5.45 −2.02 −2.84 | −5.57 −1.72 −2.13 | −3.78 −2.90 −2.85 | −3.85 −2.86 −2.44
German DAX | −3.96 −2.49 −3.57 | −4.12 −2.11 −1.32 | −3.84 −3.33 −4.02 | −3.89 −3.25 −2.55
Italian MIBTEL | −2.87 −0.81 −2.50 | −2.98 −0.07 −1.25 | −1.88 −1.61 −2.61 | −1.51 −0.71 −0.86
Milan MIB 30 | −4.18 −1.13 −3.33 | −4.28 −1.19 −1.41 | −3.36 −2.94 −3.69 | −3.27 −2.59 −2.32
Nikkei 250 | −3.35 −0.64 −0.03 | −3.36 −0.68 0.93 | −3.74 −3.35 −0.37 | −3.41 −2.76 −0.90
Spanish IBEX | −3.13 −0.52 −2.96 | −3.19 −0.87 −1.28 | −2.88 −2.10 −2.06 | −2.54 −1.68 −1.33
S&P TSE | −3.29 −1.78 −0.46 | −3.25 −1.03 0.63 | −3.07 −2.36 −1.72 | −2.91 −1.97 −0.53
MSCI Australia | −2.60 −2.15 −1.65 | −2.61 −1.48 −1.25 | −2.01 −2.10 −2.91 | −1.95 −1.79 −1.49
MSCI Belgium | −3.28 −3.69 −3.29 | −3.26 −2.79 −3.54 | −3.16 −3.67 −4.79 | −2.52 −2.63 −3.45
MSCI Brazil | −2.21 −1.52 0.54 | −2.27 −1.65 −0.62 | −1.58 −1.61 −0.97 | −1.46 −1.57 −0.92
MSCI Canada | −3.41 −1.98 −1.49 | −3.34 −1.04 0.17 | −3.01 −2.30 −1.88 | −2.81 −1.82 −0.85
MSCI Switzerland | −5.15 −2.22 −2.65 | −5.13 −1.90 −2.91 | −4.47 −3.34 −3.83 | −4.64 −3.25 −3.81
MSCI Germany | −3.15 −3.67 −1.93 | −3.18 −1.93 −1.64 | −3.26 −3.46 −3.95 | −2.83 −2.35 −1.97
MSCI Spain | −2.82 −1.39 −2.88 | −2.84 −0.96 −1.28 | −2.89 −2.38 −2.30 | −2.50 −1.68 −1.23
MSCI France | −4.38 −2.06 −2.81 | −4.39 −1.31 −2.01 | −4.64 −2.91 −3.67 | −3.99 −2.64 −2.12
MSCI UK | −4.30 −1.09 −3.13 | −4.32 −1.56 −3.04 | −3.79 −2.74 −2.29 | −3.25 −2.60 −2.52
MSCI Italy | −4.08 −2.64 −2.88 | −4.08 −1.40 −2.19 | −3.37 −3.78 −4.53 | −3.02 −2.38 −2.44
MSCI Japan | −2.73 −0.18 −0.25 | −2.62 0.15 0.60 | −2.72 −1.79 −0.58 | −2.43 −1.44 −0.58
MSCI South Korea | −4.08 0.14 1.12 | −4.10 −1.68 0.18 | −2.65 −1.62 0.34 | −3.06 −2.74 −1.38
MSCI Mexico | −2.23 −1.34 −0.63 | −2.24 −1.28 −0.92 | −1.53 −1.47 −0.89 | −1.47 −1.43 −1.09
MSCI Netherlands | −4.58 −3.35 −3.08 | −4.55 −2.36 −1.62 | −4.21 −4.21 −3.54 | −4.09 −3.28 −2.49
MSCI World | −3.30 −0.08 0.20 | −3.59 −0.93 −1.15 | −2.07 −1.22 −0.99 | −2.41 −1.73 −1.29
British pound | −2.53 −1.53 −1.60 | −2.59 −0.94 −0.97 | −2.24 −1.90 −1.82 | −2.16 −1.65 −1.24
Euro | −1.05 −0.03 0.70 | −1.03 −0.69 −0.56 | −0.65 −0.10 0.08 | −0.85 −0.69 −0.58
Swiss franc | −2.09 −0.82 −2.33 | −2.10 −1.58 −2.32 | −2.12 −1.51 −1.94 | −2.24 −2.03 −2.22
Japanese yen | −2.22 −1.12 −1.34 | −2.22 −1.44 −0.54 | −1.82 −1.76 −1.64 | −1.75 −1.85 −1.23

The left panel contains pointwise comparisons, which assess forecasting performance at a specific horizon using Giacomini and White (2006) tests; these account for both innovation variability and parameter estimation uncertainty. The results strongly favour the HEAVY models in both cases, especially at shorter horizons. The results for the S&P 400 Midcap index and the Russell 2000 further highlight the strength of the HEAVY model: despite decidedly worse performance in full-sample comparisons, HEAVY models outperform GARCH models in out-of-sample evaluation. This difference is likely due to the higher signal-to-noise ratio of realised measures.

The right panel contains cumulative comparisons for the two sets of models. Cumulative loss measures performance on the total variation over the forecast horizon, so the one-step cumulative comparison is identical to the pointwise one (and is therefore replaced by the five-step horizon). HEAVY models perform well at all horizons, with statistically significant out-performance in most series, while never being outperformed by GARCH-based forecasts.

4.4. Parameter Stability

Figure 2 shows time series plots of the HEAVY and GARCH parameters, estimated using the quasi-likelihood on a moving window of 4 years of data, recording the estimates at the time of the last data point in the sample. The top of the plot shows very dramatic percentage changes in the GARCH αG parameter but relatively modest movements in the corresponding HEAVY parameter αR. Percentage changes are important because the time variation in the conditional variance is scaled by the α parameters.


Figure 2. Recursive parameter estimates using a quasi-likelihood for GARCH and HEAVY model for the Dow Jones Industrial example. This figure is available in color online at www.interscience.wiley.com/journal/jae


The bottom of Figure 2 shows the rolling estimate of the persistence parameters for the GARCH model, αG + βG, and the HEAVY-RM model, αR + βR. The latter shows consistently less memory than the former but, interestingly, the two sequences of parameter estimates move around in lock step. Figure 2 also shows results for the α parameters; it is a volatile picture, but the percentage moves are actually quite modest.

4.5. Properties of the Innovations

One way of thinking about the performance of the model is by computing the one-step-ahead innovations from the model:

  • $\hat\zeta_t = r_t / \sqrt{h_t}, \qquad \hat\eta_t = RM_t / \mu_t,$ where $h_t$ is the conditional variance from the HEAVY-r equation and $\mu_t$ the conditional mean of the realised measure from the HEAVY-RM equation.

In earlier sections we evaluated performance using quasi-likelihood criteria; here we examine the fitted innovations directly.
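The filtering behind these innovations can be sketched as follows, assuming the standard HEAVY recursions $h_t = \omega + \alpha RM_{t-1} + \beta h_{t-1}$ and $\mu_t = \omega_R + \alpha_R RM_{t-1} + \beta_R \mu_{t-1}$; the function name and the start-up values are our own illustrative choices.

```python
from math import sqrt

def heavy_innovations(returns, rm, omega, alpha, beta, omega_r, alpha_r, beta_r):
    # Filter the HEAVY-r and HEAVY-RM recursions and return the one-step-ahead
    # innovations (zeta_hat, eta_hat) = (r_t / sqrt(h_t), RM_t / mu_t)
    n = len(returns)
    rm_bar = sum(rm) / n
    h = mu = rm_bar            # crude start-up: the sample mean of the realised measure
    zeta_hat, eta_hat = [], []
    for t in range(n):
        zeta_hat.append(returns[t] / sqrt(h))
        eta_hat.append(rm[t] / mu)
        h = omega + alpha * rm[t] + beta * h           # h_{t+1}
        mu = omega_r + alpha_r * rm[t] + beta_r * mu   # mu_{t+1}
    return zeta_hat, eta_hat
```

In practice the parameters would come from the quasi-likelihood fit described earlier.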

Figure 3 shows these innovations for the Dow Jones example, which is fairly typical of the results we have seen for other series. The top left of the figure gives a time series plot of ζ̂t. It does not show much volatility clustering, but there are some quite large negative innovations, with a couple of days recording standardised falls below −5. These occurred at the start of 1996 and at the start of 2007. There are no remarkable moves during the credit crunch.

Figure 3. Innovations ζ̂t and η̂t from the HEAVY model fitted to the DJI. Top left: the HEAVY-r model innovations ζ̂t, which should be roughly a martingale difference sequence with unit variance. Top right: η̂t, which should have a unit conditional mean and be uncorrelated. Bottom left: a cross-plot of ζ̂t and η̂t; bottom right: the equivalent plot mapped into copula space using the marginal empirical distribution functions to calculate the empirical copula measure. This figure is available in color online at www.interscience.wiley.com/journal/jae

The top right of Figure 3 gives a time series plot of η̂t, which has large moves at the same times as the large moves in ζ̂t. This is confirmed in the bottom left of the figure, which cross-plots ζ̂t against η̂t and suggests some dependence in the bottom right-hand quadrant. The bottom right panel shows the empirical copula of ζ̂t and η̂t, in which it is hard to see much dependence, although there is little mass in the bottom left-hand quadrant and a cluster of points in the bottom right.

Summary statistics for the innovations for all the series are given in Table VIII. We have chosen not to report the estimated variance of ζ̂t or the estimated mean of η̂t, as these are extremely close to one for all series. Here r denotes the estimated correlation coefficient and rs denotes Spearman's rank correlation. Looking first at the Dow Jones row, and indeed at all the equity series, the raw correlations show a large amount of negative correlation between ζ̂t and η̂t. This negative dependence is a measure of statistical leverage: falls in equity prices are associated with rises in volatility. For exchange rates the correlation is roughly zero. The Spearman rank correlations show the same pattern. The final column reports the first-order autocorrelation of η̂t, which is small but generally positive. This may indicate that a more complex specification could be justified for the HEAVY-RM model, a topic of ongoing research.

Table VIII. Descriptive statistics of the estimated innovations ζ̂t and η̂t from the fitted HEAVY model. The empirical variance of ζ̂t and the empirical mean of η̂t were both very close to one and so are not reported here. The first five columns are estimated moments of the marginal distributions: the minimum, maximum and skewness of ζ̂t, the standard deviation of ζ̂t² and the standard deviation of η̂t. r denotes the correlation, rs is the Spearman rank correlation coefficient and ρ is the first-order autocorrelation of η̂t

Asset                   min ζ̂   max ζ̂   skew ζ̂   s.d. ζ̂²   s.d. η̂      r       rs       ρ
Dow Jones Industrials   −6.19   3.15   −0.336    1.82     0.270   −0.313   −0.280   0.008
Nasdaq 100              −5.90   4.25   −0.149    1.66     0.264   −0.321   −0.323   0.043
S&P 400 Midcap          −7.47   3.51   −0.366    1.87     0.257   −0.351   −0.334   0.036
S&P 500                 −6.86   3.61   −0.396    1.90     0.270   −0.331   −0.312   0.027
Russell 3000            −7.01   3.97   −0.338    1.88     0.276   −0.339   −0.335   0.019
Russell 1000            −7.40   3.91   −0.346    1.92     0.275   −0.337   −0.329   0.021
Russell 2000            −7.08   3.48   −0.385    1.83     0.285   −0.289   −0.264   0.030
CAC 40                  −4.37   3.64   −0.212    1.53     0.262   −0.350   −0.323   0.017
FTSE 100                −4.29   3.90   −0.307    1.52     0.263   −0.330   −0.312   0.012
German DAX              −5.30   3.79   −0.216    1.59     0.259   −0.396   −0.367   0.026
Italian MIBTEL          −4.90   3.40   −0.464    1.63     0.255   −0.430   −0.421   0.010
Milan MIB 30            −5.38   4.87   −0.076    1.79     0.263   −0.334   −0.339   0.012
Nikkei 225              −5.78   3.95   −0.343    1.77     0.260   −0.198   −0.162   0.043
Spanish IBEX            −6.95   5.19   −0.237    1.92     0.262   −0.328   −0.297   0.035
S&P TSE                 −5.82   3.48   −0.225    1.63     0.254   −0.282   −0.286   0.045
MSCI Australia          −6.19   3.63   −0.318    1.75     0.238   −0.244   −0.207   0.040
MSCI Belgium            −5.78   3.04   −0.391    1.77     0.239   −0.310   −0.267   0.032
MSCI Brazil             −5.08   3.61   −0.194    1.59     0.258   −0.327   −0.311   0.041
MSCI Canada             −4.52   3.47   −0.232    1.58     0.247   −0.309   −0.298   0.064
MSCI Switzerland        −5.98   3.43   −0.453    1.84     0.231   −0.396   −0.346   0.012
MSCI Germany            −4.94   3.21   −0.333    1.57     0.246   −0.390   −0.370   0.042
MSCI Spain              −5.48   3.63   −0.211    1.60     0.243   −0.312   −0.297   0.033
MSCI France             −4.55   3.06   −0.249    1.48     0.250   −0.345   −0.335   0.027
MSCI UK                 −4.71   3.17   −0.381    1.60     0.251   −0.347   −0.328   0.006
MSCI Italy              −4.44   3.17   −0.392    1.56     0.241   −0.396   −0.385   0.014
MSCI Japan              −5.95   3.41   −0.351    1.69     0.235   −0.274   −0.212   0.031
MSCI South Korea        −5.64   3.37   −0.239    1.71     0.222   −0.233   −0.229   0.001
MSCI Mexico             −5.19   3.75   −0.107    1.74     0.241   −0.262   −0.222   0.071
MSCI Netherlands        −5.00   3.23   −0.296    1.55     0.242   −0.368   −0.352   0.040
MSCI World              −5.36   4.34   −0.197    1.62     0.259   −0.227   −0.225   0.061
British pound           −3.58   3.76   −0.061    1.51     0.170   −0.050   −0.030   0.065
Euro                    −4.20   3.48    0.060    1.54     0.196    0.014    0.017   0.053
Swiss franc             −4.49   3.91   −0.182    1.57     0.184   −0.101   −0.080   0.064
Japanese yen            −4.65   3.71   −0.322    1.80     0.222   −0.193   −0.128   0.028

Another interesting feature of the table is the strong evidence that ζ̂t has a negative skew and that the standard deviation of ζ̂t² is not far from two. The latter suggests that the marginal distribution of ζ̂t is not very thick tailed. These results are common across the different series, the main exception being the exchange rates, which are closer to symmetric (the yen aside).

4.6. Volatility Hedgehog Plots

It is challenging to plot sequences of multistep-ahead volatility forecasts. We do so using what we call 'volatility hedgehog plots', illustrated through the credit crunch of late 2008. An example is Figure 4, calculated for the MSCI Canada series. It plots the time series of one-step-ahead forecasts ht from the HEAVY-r model, joined together by a thick solid red line. For a selection of days (plotting all days makes the details hard to see) we also draw, branching off the one-step-ahead forecast, the corresponding multistep-ahead forecasts over the next month, using a dashed line. The corresponding results for the GARCH model are shown using a solid line with added symbols, with the multistep-ahead forecasts shown using a dotted line.
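Each dashed spine of a hedgehog plot is a sequence of iterated forecasts branching off a one-step-ahead value. A minimal sketch, assuming the standard HEAVY recursions and a GARCH(1,1) comparison (the function and parameter names are ours):

```python
def heavy_spine(h1, mu1, horizon, omega, alpha, beta, omega_r, alpha_r, beta_r):
    # One dashed 'spine': multistep HEAVY forecasts of h branching off the
    # one-step-ahead forecast h1, with mu1 the one-step forecast of RM
    spine, h, mu = [h1], h1, mu1
    for _ in range(horizon - 1):
        h = omega + alpha * mu + beta * h        # uses the forecast of RM one step back
        mu = omega_r + (alpha_r + beta_r) * mu   # iterate the HEAVY-RM mean forward
        spine.append(h)
    return spine

def garch_spine(h1, horizon, omega_g, alpha_g, beta_g):
    # GARCH(1,1) forecasts decay geometrically towards omega_g / (1 - alpha_g - beta_g)
    spine, h = [h1], h1
    for _ in range(horizon - 1):
        h = omega_g + (alpha_g + beta_g) * h
        spine.append(h)
    return spine
```

The GARCH spine always decays geometrically towards its long-run level, whereas the HEAVY spine tracks the forecast path of the realised measure, which is what produces the momentum arcs seen in the figures.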

Figure 4. Volatility hedgehog plot for annualised volatility for the MSCI Canada series. The hedgehog plots are given for both HEAVY and GARCH models. Areas of momentum are indicated by ellipses. This figure is available in color online at www.interscience.wiley.com/journal/jae

The figure shows the GARCH model always slowly mean reverting back towards its long-term average. It also shows, from the start of September, a sequence of upward revisions in forecast volatility, caused by the slow adjustment of the GARCH model.

The HEAVY model has a rather different profile. This is most clearly seen at the highest volatility point, where the multistep-ahead forecast shows momentum: the model expected volatility to increase even further than had already been seen in the data. This is highlighted by an ellipse. Another interesting feature is that, in the first half of the sample, the HEAVY model forecasts much higher levels of volatility. After the end of October volatility falls, with the HEAVY model indicating very fast falls and a lull in volatility during November 2008; volatility then kicks back up in December before settling at around 45% for the remaining 3 months of the data. GARCH models do not see this lull; instead, from halfway through October until the end of December the GARCH model shows historically very high levels of volatility with a slow decline.

Overall the main impressions are the slow and steady adjustments of the GARCH model and the more rapid movements implied by the HEAVY model. There is some evidence that GARCH was behind the curve during the peak of the financial crisis, while HEAVY models rapidly adjust. Likewise, it looks as though GARCH's volatility was too high during late December and early January, as the model could not allow the conditional variance to fall rapidly enough. The momentum effects of the HEAVY model are not very large in these figures but they do have an impact. Basically local trends are followed through before mean reversion overcomes them.

More dramatic momentum effects can be seen in the Swiss franc case, which is the most extreme example of momentum we have seen in our empirical work. For the HEAVY model β is much higher than is typical for equities, at around 0.95. The result is a series of interesting arcs in the volatility hedgehog plot given in Figure 5. The evidence in Table III is that the HEAVY model fits better than the GARCH model, but the difference is very modest for the exchange rates in the library, while for other assets it is quite substantial.

Figure 5. Extreme case of momentum. Volatility hedgehog plot for annualised volatility for the Swiss franc against the US dollar. The hedgehog plots are given for both HEAVY and GARCH models. This figure is available in color online at www.interscience.wiley.com/journal/jae

5. EXTENSIONS

5.1. Statistical Leverage Effect

We can parametrically model statistical leverage effects, where falls in asset prices are associated with increases in future volatility, by adding a new equation for a realised semivariance, written RSt here. Realised semivariances (sums of squared negative intraday returns) were introduced by Barndorff-Nielsen et al. (2009b) and further emphasised in empirical work by Patton and Sheppard (2009b). Now our model becomes

  • $h_t = \omega + \alpha RM_{t-1} + \alpha^{-} RS_{t-1} + \beta h_{t-1}, \qquad E(RS_t \mid \mathcal{F}_{t-1}) = \mu^{-}_t = \omega^{-} + \alpha^{-}_R RS_{t-1} + \beta^{-}_R \mu^{-}_{t-1}.$

The expansion of the model to allow for realised semivariances raises no new issues (allowing lags of RSt to appear in the dynamics of RMt could potentially help too, but we will not discuss that here).
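For concreteness, a realised semivariance can be computed from a day's intraday returns as follows (a sketch; the function names are ours):

```python
def realised_semivariance(intraday_returns):
    # Sum of squared *negative* intraday returns (Barndorff-Nielsen et al., 2009b)
    return sum(r * r for r in intraday_returns if r < 0)

def realised_variance(intraday_returns):
    # Ordinary realised variance, for comparison: RV is the sum of the
    # downside and upside semivariances
    return sum(r * r for r in intraday_returns)
```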

The paper by Engle and Gallo (2006) suggests an alternative approach. Let $i_t = 1_{\{r_t < 0\}}$; they extend models by interacting $i_t$ with volatility measures, following the tradition of the GARCH literature. Doing this to the HEAVY model gives

  • $h_t = \omega + \alpha RM_{t-1} + \alpha^{-} i_{t-1} RM_{t-1} + \beta h_{t-1}.$(24)

This model is easy to estimate, for $i_{t-1}$ is in $\mathcal{F}_{t-1}$. However, to make two-step-ahead forecasts we run into trouble, as we neither know $i_t RM_t$ nor have a forecast of it.

One approach to this is to assume that

  • $i_t \perp\!\!\!\perp RM_t \mid \mathcal{F}_{t-1},$

where $A \perp\!\!\!\perp B$ denotes that A and B are statistically independent. This would imply

  • $E(i_t RM_t \mid \mathcal{F}_{t-1}) = E(i_t \mid \mathcal{F}_{t-1})\, E(RM_t \mid \mathcal{F}_{t-1}) = \Pr(r_t < 0 \mid \mathcal{F}_{t-1})\, \mu_t.$

Typically we would assume that $\Pr(r_t < 0 \mid \mathcal{F}_{t-1})$ is a constant p, which is likely to be very close to 1/2. This would allow multistep-ahead forecasts to be computed analytically and straightforwardly.
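Under this independence assumption, multistep-ahead forecasts of (24) can be iterated analytically, with $E(i_t RM_t \mid \mathcal{F}_{t-1})$ replaced by $p\,\mu_t$. A sketch (all names are ours, and the parameterisation of (24) follows our reading above):

```python
def forecast_leverage_heavy(h_next, mu_next, horizon,
                            omega, alpha, alpha_neg, beta,
                            omega_r, alpha_r, beta_r, p=0.5):
    # Multistep forecasts of h under the interaction model, absorbing the
    # leverage term via E[i RM | past] = p * mu (the independence assumption)
    h_fore, mu = [h_next], mu_next
    for _ in range(horizon - 1):
        # E[h_{t+s}] = omega + (alpha + alpha_neg * p) * E[RM_{t+s-1}] + beta * E[h_{t+s-1}]
        h_fore.append(omega + (alpha + alpha_neg * p) * mu + beta * h_fore[-1])
        # iterate the HEAVY-RM mean forward with persistence alpha_r + beta_r
        mu = omega_r + (alpha_r + beta_r) * mu
    return h_fore
```

Setting `alpha_neg = 0` recovers the plain HEAVY multistep forecast recursion.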

Perhaps more wisely, we could use a bootstrap to simulate from the empirical distribution of $(\zeta_t, \eta_t)$ in (21), which allows simulation through (24). This way of dealing with statistical leverage has the virtue that it also delivers an estimator of the multistep-ahead predictive distribution, and so may reveal the long left-hand tail of asset prices often induced by statistical leverage, even though $\zeta_t$ is marginally relatively symmetric.

5.2. A Semiparametric Model for $F_{\zeta,\eta}$

The joint distribution of the innovations $F_{\zeta,\eta}$ can be approximated by the joint empirical distribution function, which can be used inside a bootstrap procedure.

We could impose a model on the joint distribution via the following simple structure. Let $\eta_t \sim F_\eta$ and

  • $\zeta_t = \beta(\eta_t - 1) + \sqrt{\eta_t}\,\varepsilon_t, \qquad \varepsilon_t \sim F_\varepsilon, \quad \varepsilon_t \perp\!\!\!\perp \eta_t.$

This is a nonparametric location-scale mixture.13 Now $\varepsilon_t = \{\zeta_t - \beta(\eta_t - 1)\}/\sqrt{\eta_t}$, and so we can estimate the distribution functions $F_\eta$ and $F_\varepsilon$ by their univariate empirical distribution functions, having estimated β using the fact that under this model $\mathrm{cov}(\zeta_t, \eta_t) = \beta\,\mathrm{var}(\eta_t)$.
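To make this concrete, the sketch below simulates one version of the location-scale mixture, $\zeta_t = \beta(\eta_t - 1) + \sqrt{\eta_t}\,\varepsilon_t$, and recovers β from the covariance of $(\zeta_t, \eta_t)$. The exponential mixing law and Gaussian $\varepsilon_t$ are our own illustrative choices; with this mixing law var(η) = 1, so cov(ζ, η) = β.

```python
import random

def simulate_mixture(n, beta):
    # Draw (zeta, eta) pairs from the location-scale mixture
    # zeta = beta*(eta - 1) + sqrt(eta)*eps, with eps ~ N(0,1) independent of eta
    zetas, etas = [], []
    for _ in range(n):
        eta = random.expovariate(1.0)     # positive, mean-one mixing variable
        eps = random.gauss(0.0, 1.0)      # independent scale innovation
        zetas.append(beta * (eta - 1.0) + eta ** 0.5 * eps)
        etas.append(eta)
    return zetas, etas

def estimate_beta(zetas, etas):
    # Method of moments: cov(zeta, eta) = beta * var(eta) under the mixture
    n = len(zetas)
    zbar = sum(zetas) / n
    ebar = sum(etas) / n
    cov = sum((z - zbar) * (e - ebar) for z, e in zip(zetas, etas)) / n
    var = sum((e - ebar) ** 2 for e in etas) / n
    return cov / var
```

Given β, the residuals ε can be backed out and the two marginal empirical distribution functions formed, as described above.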

5.3. Extending HEAVY-r

In some cases where the realised measure is inadequate it may be better to extend the HEAVY-r model to allow a GARCHX structure. The HEAVY model then becomes

  • $h_t = \omega + \alpha RM_{t-1} + \beta h_{t-1} + \gamma r_{t-1}^2.$

It is then straightforward to see that $r_t^2$ has an ARMA(2,2) representation with autoregressive roots $\alpha_R + \beta_R$ and $\beta + \gamma$. The moving average roots are not changed by having $\gamma > 0$. Thus this extension has more momentum than the standard HEAVY model.

The derivation of this result is as follows. Write $v_t = RM_t - \mu_t$, where $\mu_t = E(RM_t \mid \mathcal{F}_{t-1})$ obeys the HEAVY-RM recursion $\mu_t = \omega_R + \alpha_R RM_{t-1} + \beta_R \mu_{t-1}$. Then

  • $\{1 - (\alpha_R + \beta_R)L\} RM_t = \omega_R + (1 - \beta_R L) v_t,$

where L is the lag operator. Likewise, writing $u_t = r_t^2 - h_t$:

  • $\{1 - (\beta + \gamma)L\} r_t^2 = \omega + \alpha RM_{t-1} + (1 - \beta L) u_t.$

Combining delivers the result. In particular:

  • $\{1 - (\alpha_R + \beta_R)L\}\{1 - (\beta + \gamma)L\} r_t^2 = \omega(1 - \alpha_R - \beta_R) + \alpha\omega_R + \alpha(1 - \beta_R L) v_{t-1} + \{1 - (\alpha_R + \beta_R)L\}(1 - \beta L) u_t.$

Thus the error on the right-hand side is a second-order moving average in the martingale differences $(u_t, v_t)$, so

  • $r_t^2$ follows an ARMA(2,2) with autoregressive roots $\alpha_R + \beta_R$ and $\beta + \gamma$.

6. CONCLUSIONS

In this paper we have given a self-contained and sustained analysis of a particular class of conditional volatility models based on high-frequency data. HEAVY models are relatively easy to estimate and, as well as exhibiting mean reversion, display momentum, a feature missing from traditional models. We have also shown that these models are more robust to level breaks in volatility than conventional GARCH models, adjusting to the new level much faster.

APPENDIX: DATA CLEANING

The Realised Library is based on underlying high-frequency data, which we obtain through Reuters. For commercial reasons we are not in a position to make these base data, or their cleaned version, available, as Reuters owns the copyright to them. Although the raw data are of high quality, they need to be cleaned before they are suitable for econometric inference. Cleaning is an important aspect of computing realised measures. Although realised kernels are somewhat robust to noise, experience suggests that mis-recorded prices, or large amounts of turbulence at the start of a trading day, may sometimes lead them to give false signals. Barndorff-Nielsen et al. (2009a) have systematically studied the effect of cleaning on realised kernels, using cleaning methods which build on those documented by Falkenberry (2002) and Brownlees and Gallo (2006). Our data have more variation in structure than those dealt with in Barndorff-Nielsen et al. (2009a), so we discuss how our methods adapt their rules.

Most of the datasets we use are based on indexes, which are updated at distinct frequencies. Some, such as the DAX and the Dow Jones Index, are updated every second or every couple of seconds; most are updated every 15 or 60 seconds. The only data cleaning we applied to these series was rule P1, which is applied to all datasets and given below.

All Data

  • P1.
    Delete entries with a timestamp outside the interval when the exchange is open.

Quote data for the exchange rates are very plentiful and have the virtue of being unaffected by market closures. We use four rules for these data, given below as Q1–Q4. Q1 is by far the most commonly used.

Quote Data Only

  • Q1.
    When multiple quotes have the same timestamp, we replace all these with a single entry with the median bid and median ask price.
  • Q2.
    Delete entries for which the spread is negative.
  • Q3.
    Delete entries for which the spread is more than 50 times the median spread on that day.
  • Q4.
    Delete entries for which the mid-quote deviated by more than 10 mean absolute deviations from a centred rolling median of 50 observations (25 before and 25 after), excluding the observation under consideration.
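Rules P1 and Q1–Q4 can be sketched as a single cleaning pass over a day of quotes, assuming a simple list of (timestamp, bid, ask) tuples (this data layout and the function name are ours):

```python
from statistics import median

def clean_quotes(quotes, open_ts, close_ts):
    # quotes: list of (timestamp, bid, ask) tuples; returns the cleaned list
    # P1: keep only entries stamped while the exchange is open
    quotes = [q for q in quotes if open_ts <= q[0] <= close_ts]

    # Q1: collapse entries sharing a timestamp to the median bid and median ask
    by_ts = {}
    for ts, bid, ask in quotes:
        by_ts.setdefault(ts, []).append((bid, ask))
    quotes = [(ts, median(b for b, _ in pairs), median(a for _, a in pairs))
              for ts, pairs in sorted(by_ts.items())]

    # Q2: delete entries with a negative spread
    quotes = [q for q in quotes if q[2] - q[1] >= 0]

    # Q3: delete spreads more than 50 times the day's median spread
    med_spread = median(a - b for _, b, a in quotes)
    quotes = [q for q in quotes if q[2] - q[1] <= 50 * med_spread]

    # Q4: delete mid-quotes more than 10 mean absolute deviations from a
    # centred rolling median of 50 neighbours (25 each side), excluding
    # the observation under consideration
    mids = [(b + a) / 2 for _, b, a in quotes]
    kept = []
    for i, q in enumerate(quotes):
        window = mids[max(0, i - 25):i] + mids[i + 1:i + 26]
        if not window:
            kept.append(q)
            continue
        med = median(window)
        mad = sum(abs(m - med) for m in window) / len(window)
        if abs(mids[i] - med) <= 10 * mad:
            kept.append(q)
    return kept
```

The rules are applied in the order listed, so, for example, the median spread in Q3 is computed after negative spreads have been removed.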

In addition, we have made various manual edits to the library when the results were unsatisfactory. Some of these were due to the rebasing of indexes, which had its biggest effects on daily returns. It is the hope of the editors of the library that, as it develops, the degree of manual editing will decline.

ACKNOWLEDGMENTS

We thank Tim Bollerslev, Frank Diebold, Rob Engle, Nathaniel Frank, Giampiero Gallo, Eric Ghysels, Andrew Patton, Natalia Sizova, Jun Yu and two referees for various helpful suggestions. We are responsible for any errors.

REFERENCES

  • Andersen TG, Bollerslev T, Diebold FX, Labys P. 2001a. The distribution of exchange rate volatility. Journal of the American Statistical Association 96: 42–55. Correction published 2003, vol. 98, p. 501.
  • Andersen TG, Bollerslev T, Diebold FX, Ebens H. 2001b. The distribution of realized stock return volatility. Journal of Financial Economics 61: 43–76.
  • Andersen TG, Bollerslev T, Diebold FX, Labys P. 2003. Modeling and forecasting realized volatility. Econometrica 71: 579–625.
  • Andersen TG, Bollerslev T, Meddahi N. 2006. Market microstructure noise and realized volatility forecasting. Working paper, Department of Economics, Duke University. Forthcoming, Journal of Econometrics.
  • Andersen TG, Bollerslev T, Diebold FX. 2007. Roughing it up: including jump components in the measurement, modeling and forecasting of return volatility. Review of Economics and Statistics 89: 707–720.
  • Andersen TG, Bollerslev T, Diebold FX. 2009. Parametric and nonparametric measurement of volatility. In Handbook of Financial Econometrics, Aït-Sahalia Y, Hansen LP (eds). North-Holland: Amsterdam; 67–137.
  • Andrews DWK. 1991. Heteroskedasticity and autocorrelation consistent covariance matrix estimation. Econometrica 59: 817–858.
  • Bandi FM, Russell JR. 2006. Separating microstructure noise from volatility. Journal of Financial Economics 79: 655–692.
  • Bandi FM, Russell JR. 2008. Microstructure noise, realized variance, and optimal sampling. Review of Economic Studies 75: 339–369.
  • Barndorff-Nielsen OE, Shephard N. 2002. Econometric analysis of realised volatility and its use in estimating stochastic volatility models. Journal of the Royal Statistical Society, Series B 64: 253–280.
  • Barndorff-Nielsen OE, Shephard N. 2006. Econometrics of testing for jumps in financial economics using bipower variation. Journal of Financial Econometrics 4: 1–30.
  • Barndorff-Nielsen OE, Shephard N. 2007. Variation, jumps and high frequency data in financial econometrics. In Advances in Economics and Econometrics: Theory and Applications, Ninth World Congress, Blundell R, Persson T, Newey WK (eds). Econometric Society Monographs, Cambridge University Press: Cambridge, UK; 328–372.
  • Barndorff-Nielsen OE, Hansen PR, Lunde A, Shephard N. 2008. Designing realised kernels to measure the ex-post variation of equity prices in the presence of noise. Econometrica 76: 1481–1536.
  • Barndorff-Nielsen OE, Hansen PR, Lunde A, Shephard N. 2009a. Realised kernels in practice: trades and quotes. Econometrics Journal 12: C1–C32.
  • Barndorff-Nielsen OE, Kinnebrouck S, Shephard N. 2009b. Measuring downside risk: realised semivariance. In Volatility and Time Series Econometrics: Essays in Honor of Robert F. Engle, Bollerslev T, Russell J, Watson M (eds). Oxford University Press: Oxford; 117–136.
  • Bollerslev T. 1986. Generalised autoregressive conditional heteroskedasticity. Journal of Econometrics 31: 307–327.
  • Bollerslev T, Wooldridge JM. 1992. Quasi maximum likelihood estimation and inference in dynamic models with time varying covariances. Econometric Reviews 11: 143–172.
  • Bollerslev T, Kretschmer U, Pigorsch C, Tauchen G. 2009. A discrete-time model for daily S&P 500 returns and realized variations: jumps and leverage effects. Journal of Econometrics 150: 151–166.
  • Brownlees CT, Gallo GM. 2006. Financial econometrics at ultra-high frequency: data handling concerns. Computational Statistics and Data Analysis 51: 2232–2245.
  • Brownlees CT, Gallo GM. 2009. Comparison of volatility measures: a risk management perspective. Journal of Financial Econometrics (forthcoming).
  • Christensen K, Podolskij M. 2007. Asymptotic theory for range-based estimation of integrated volatility of a continuous semi-martingale. Journal of Econometrics 141: 323–349.
  • Cipollini F, Engle RF, Gallo G. 2007. A model for multivariate non-negative valued processes in financial econometrics. Working paper, Stern School of Business, New York University.
  • Clements MP, Hendry DF. 1999. Forecasting Non-stationary Economic Time Series: The Zeuthen Lectures on Economic Forecasting. MIT Press: Cambridge, MA.
  • Corradi V, Distaso W. 2006. Semiparametric comparison of stochastic volatility models using realized measures. Review of Economic Studies 73: 635–667.
  • Corsi F. 2009. A simple long memory model of realized volatility. Journal of Financial Econometrics 7: 174–196.
  • Cox DR. 1961a. Prediction by exponentially weighted moving averages and related methods. Journal of the Royal Statistical Society, Series B 23: 414–422.
  • Cox DR. 1961b. Tests of separate families of hypotheses. Proceedings of the Berkeley Symposium 4: 105–123.
  • Diebold FX, Mariano RS. 1995. Comparing predictive accuracy. Journal of Business and Economic Statistics 13: 253–263.
  • Drost FC, Nijman TE. 1993. Temporal aggregation of GARCH processes. Econometrica 61: 909–927.
  • Engle RF. 1982. Autoregressive conditional heteroskedasticity with estimates of the variance of the United Kingdom inflation. Econometrica 50: 987–1007.
  • Engle RF. 2002. New frontiers for ARCH models. Journal of Applied Econometrics 17: 425–446.
  • Engle RF, Gallo GM. 2006. A multiple indicator model for volatility using intra daily data. Journal of Econometrics 131: 3–27.
  • Engle RF, Lee GGJ. 1999. A permanent and transitory component model of stock return volatility. In Cointegration, Causality, and Forecasting: A Festschrift in Honour of Clive W. J. Granger, Engle RF, White H (eds). Oxford University Press: Oxford; 475–497.
  • Engle RF, Mezrich J. 1996. GARCH for groups. Risk 9(8): 36–40.
  • Engle RF, Hendry DF, Richard JF. 1983. Exogeneity. Econometrica 51: 277–304.
  • Falkenberry TN. 2002. High frequency data filtering. Technical report, Tick Data.
  • Fan J, Wang Y. 2007. Multi-scale jump and volatility analysis for high-frequency financial data. Journal of the American Statistical Association 102: 1349–1362.
  • Ghysels E, Rubia A, Valkanov R. 2009. Multi-period forecasts of volatility: direct, iterated and mixed-data approaches. Working paper, Department of Economics, UNC at Chapel Hill.
  • Giacomini R, White H. 2006. Tests of conditional predictive ability. Econometrica 74: 1545–1578.
  • Giot P, Laurent S. 2004. Modelling daily value-at-risk using realized volatility and ARCH type models. Journal of Empirical Finance 11: 379–398.
  • Golub GH, Van Loan CF. 1989. Matrix Computations. Johns Hopkins University Press: Baltimore, MD.
  • Hansen PR, Lunde A. 2006. Realized variance and market microstructure noise (with discussion). Journal of Business and Economic Statistics 24: 127–218.
  • Heber G, Lunde A, Shephard N, Sheppard KK. 2009. OMI's realised library, Version 0.1. Oxford-Man Institute: University of Oxford.
  • Hendry DF. 1995. Dynamic Econometrics. Oxford University Press: Oxford.
  • Jacod J, Li Y, Mykland PA, Podolskij M, Vetter M. 2009. Microstructure noise in the continuous case: the pre-averaging approach. Stochastic Processes and Their Applications 119: 2249–2276.
  • Lu Y. 2005. Modeling and forecasting daily stock return volatility with intra-day price fluctuations information. PhD thesis, Department of Economics, University of Pennsylvania.
  • Maheu JM, McCurdy TH. 2009. Do high-frequency measures of volatility improve forecasts of return distributions? Working paper, Department of Economics, Toronto University.
  • Marcellino M, Stock JH, Watson MW. 2006. A comparison of direct and iterated multistep AR methods for forecasting macroeconomic time series. Journal of Econometrics 135: 499–526.
  • Patton AJ. 2009. Volatility forecast evaluation and comparison using imperfect volatility proxies. Journal of Econometrics (forthcoming).
  • Patton AJ, Sheppard KK. 2009a. Evaluating volatility forecasts. In Handbook of Financial Time Series, Andersen TG, Davis RA, Kreiss JP, Mikosch T (eds). Springer: Berlin; 801–838.
  • Patton AJ, Sheppard KK. 2009b. Good volatility, bad volatility: signed jumps and persistence of volatility. Working paper, Oxford-Man Institute, University of Oxford.
  • Rivers D, Vuong QH. 2002. Model selection for nonlinear dynamic models. Econometrics Journal 5: 1–39.
  • Vuong QH. 1989. Likelihood ratio tests for model selection and non-nested hypotheses. Econometrica 57: 307–333.
  • Zhang L. 2006. Efficient estimation of stochastic volatility using noisy observations: a multi-scale approach. Bernoulli 12: 1019–1043.
  • Zhang L, Mykland PA, Aït-Sahalia Y. 2005. A tale of two time scales: determining integrated volatility with noisy high-frequency data. Journal of the American Statistical Association 100: 1394–1411.
  • 1

    See also the work by Bandi and Russell (2006, 2008), Andersen et al. (2006), Hansen and Lunde (2006), Corradi and Distaso (2006) and Christensen and Podolskij (2007).

  • 2

    See also the important work of Fan and Wang (2007) on the use of wavelets in this context.

  • 3

    We could also have included jump robust measures, which typically lead to an increase in predictive power. See, for example, Andersen et al. (2007) and Barndorff-Nielsen and Shephard (2006). This has virtues but then we would also need to forecast these terms in making multistep-ahead forecasts. See the work of Engle and Gallo (2006) in this context.

  • 4

    Of course, the most basic realised measure is the squared daily return, so in some sense the GARCH model is a HEAVY-r model. This point was made to us by Frank Diebold. From this point of view one might think of a HEAVY-r model as a 'turbo-charged' GARCH. Another interpretation of HEAVY models is that one could unravel $h_t$ in terms of many lags of RM, which relates it directly back to the forecasting models considered by Andersen, Bollerslev and Diebold in various papers in which they focused on forecasting RM using lags of RM.

  • 5

    A stronger set of assumptions, which is useful in inspiring a quasi-likelihood, is that jointly (εt, ηt)∼ i.i.d., over the subscript t. We will not make the latter assumption unless we explicitly say so.

  • 6

    There may be advantages in truncating the estimator of κ to insist it is weakly less than one but we have not done that in this paper.

  • 7

    In the context of forecasting this is related to Diebold and Mariano (1995). Vuong (1989) has the virtue of being valid even if neither model is correct. It just assesses which is better in terms of the unconditional average likelihood ratio.

  • 8

    If we condition on the lagged realised measure the additional memory in the HEAVY-r model is modest.

  • 9

    We work with $\sqrt{RM_t}$, rather than the original RMt, as volatilities (as opposed to variance-type objects) are easier to interpret later; this choice has little impact here and the same exercise could be carried out based on RMt.

  • 10

    There may be some advantages in using a block sampling scheme for the innovations (ζt, ηt) as they are not expected to be exactly temporally independent, although they should be temporally uncorrelated. However, we have not explored that here.

  • 11
  • 12

    For our MSCI index data we only have raw returns at the 1-minute level, which meant that when we subsampled at the 30-second level we produce the same RV twice (this has no impact as we divide everything by two).

  • 13

    If we made the parametric assumptions that Fη was a generalised inverse Gaussian distribution and Fε Gaussian, then the resulting distribution for ζt would be the well-known generalised hyperbolic distribution.

Supporting Information

The JAE Data Archive directory is available at http://qed.econ.queensu.ca/jae/datasets/shephard001/ .

Please note: Wiley Blackwell is not responsible for the content or functionality of any supporting information supplied by the authors. Any queries (other than missing content) should be directed to the corresponding author for the article.