Abstract


Investor risk is a complicated concept in practice and is not well captured by measures of volatility, a point well understood in uncertainty theory. Rather than asking statisticians to attempt to measure risk, it may be better to listen to decision theorists, but their suggestions are not very practical. Diversification is clearly helpful in reducing risk, but the risk level of one portfolio cannot be measured without knowing the risks of other major portfolios. A meta-analysis can be used to compare alternative volatility measures in terms of their forecasting utility. Copyright © 2002 John Wiley & Sons, Ltd.


1 INTRODUCTION


My invitation to contribute to a conference on the topic of volatility included the suggestion that I should make my remarks ‘wide-ranging.’ However, on thinking about volatility I quickly realized that the topic was well worked over and I was unlikely to be able to make a worthwhile contribution. So instead, I decided to write about something quite different, that is, risk. On reading about risk it soon became clear to me that this was a very confused concept. There are, of course, many types of risk but there is still disagreement even when concentrating on just the risks faced by investors. The reason for this confusion may lie in the obvious fact that there are many types of people and researchers interested in the stock market. A brief list, roughly increasing in practicality, would include:

  • (1)
    Continuous-time mathematicians (option pricing theory).
  • (2)
    Uncertainty-theory economists (portfolio theory, diversification, CAPM).
  • (3)
    Econometricians (ARCH).
  • (4)
    Empirical statisticians (random walk, efficient market theory).
  • (5)
    Journalists (rationalization of stock price movements).
  • (6)
    Mutual funds/brokers' analysts—‘professionals’ (buy low, sell high).
  • (7)
    Individual investors (include housing in portfolio).

(I have included some of the major discoveries associated with each group.)

Some of these groups could easily be further sub-divided and other groups added, such as financial engineers, financial physicists, financial biologists, and, who knows, financial theologians? Each group is inclined to have its own distinct attitude towards risk and rather to ignore the suggestions or requirements of other groups. It is well known that risk is a highly personal matter: the old dislike risk more than the young, and women dislike it more than men, for example (see Financial Times, 7 April 2001, page xxiv). No simple definition or quantification will satisfy everyone.

It is convenient to start the discussion by considering a simple theory of risk proposed by R. Duncan Luce (1980, correction in 1981). If a return has density f, Luce is concerned with finding an associated risk measure R(f). Let α > 0 represent a change of scale, so that x → αx, with density

  • fα(x) = (1/α) f(x/α)      (1)

Two assumptions are made; the first is multiplicative:

  • Assumption 1:

    • R(fα) = S(α) R(f)      (2)

    where S() is some increasing function with S(1) = 1; and

  • Assumption 2: there is a non-negative function T, with T(0) = 0, such that for all density functions f,

    • R(f) = ∫_{−∞}^{∞} T(x) f(x) dx      (3)

    It is shown in the correction that, using just these simple assumptions, it follows that

    • R(f) = A1 ∫_{0}^{∞} r^θ f(r) dr + A2 ∫_{−∞}^{0} |r|^θ f(r) dr      (4)

    with A1, A2 both ≥0, and some θ > 0, where r is now a return or a mean adjusted return. It should be noted that the risk measure starts as a function of the whole distribution f, but ends with separate emphasis on each half-distribution.

Initially considering the symmetric case A1 = A2, one has the class of volatility measures

  • Vθ = E[|r|^θ] = ∫_{−∞}^{∞} |r|^θ f(r) dr      (5)

which includes the variance, θ = 2, and the mean absolute deviation, θ = 1. In some previous studies (e.g. Ding, Granger and Engle, 1993; and Granger, Ding and Spear, 2000) the time series and distributional properties of these measures have been studied empirically, and the absolute deviations were found to have particular properties, such as the longest memory.
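
As a minimal illustration of this class of measures, the following sketch (a hypothetical example of my own, not from the paper) computes Vθ for θ = 1 and θ = 2 from a simulated series of mean-adjusted returns; the function name and the Student-t data are assumptions made purely for illustration.

```python
import numpy as np

def v_theta(returns, theta):
    """Volatility measure V_theta = E|r - mean(r)|^theta, as in equation (5)."""
    r = np.asarray(returns, dtype=float)
    return np.mean(np.abs(r - r.mean()) ** theta)

# Simulated heavy-tailed returns (Student-t with 4 degrees of freedom).
rng = np.random.default_rng(0)
returns = rng.standard_t(df=4, size=5000) * 0.01

print("V_1 (mean absolute deviation):", v_theta(returns, 1.0))
print("V_2 (variance):               ", v_theta(returns, 2.0))
```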

It might be worth considering variance and mean absolute measures as alternatives, given that returns from the stock market are known to come from a distribution with particularly long tails. The variance of a variance involves the fourth moments of returns, which will be very unstable, whereas the variance of absolute returns is just the variance of a return and is thus more stable; an argument can therefore be made for preferring absolute returns. A rather different argument comes from an old area of the statistical literature. Suppose that one had the unlikely situation that a regression

  • yt = β xt + εt      (6)

is being considered and that the kurtosis of εt is, somehow, known. Suppose a statistician decides to use an Lp norm, that is, to minimize E[|εt|^p] for some p > 0 in performing the regression; what value of p is most appropriate? The resulting Lp estimator is asymptotically normal, centred on the true coefficient, with a covariance matrix depending on p, so that p can be chosen to minimize this covariance matrix. A trio of papers (Nyquist, 1983; Money et al., 1982; Harter, 1977) find that the optimum p-values are p = 1 for the Laplace and Cauchy distributions for ε, p = 2 if ε is Gaussian, and p = ∞ (the min/max estimator) if ε has a rectangular distribution. Harter suggests the following approximate rule based on the kurtosis k of ε:

  • p = 1 if k > 3.8;   p = 2 if 2.2 ≤ k ≤ 3.8;   p = 3 if k < 2.2

It is rarely the case that the kurtosis is known when conducting an empirical exercise, but in finance the kurtosis of residuals can safely be thought of as being over 4, and so the L1 norm, corresponding to absolute returns, is preferred. This suggests that a regression for CAPM, for example, should use this L1 norm to get superior results.
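
To make the Lp-norm idea concrete, here is a small sketch on assumed, hypothetical data (the variable names, the simulated market series and the use of scipy's general-purpose optimizer are my choices, not the paper's): the same market-model regression is estimated by the L2 norm (ordinary least squares) and by the L1 norm (least absolute deviations), the latter being the estimator favoured above when residuals are heavy tailed.

```python
import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(1)
n = 1000
market = rng.normal(0.0, 0.01, n)              # hypothetical market returns
eps = rng.standard_t(df=3, size=n) * 0.01      # heavy-tailed residuals (kurtosis well over 4)
asset = 0.0005 + 1.2 * market + eps            # "true" alpha = 0.0005, beta = 1.2

def lp_loss(params, p):
    """Mean of |residual|^p for the regression asset = a + b * market + residual."""
    a, b = params
    resid = asset - a - b * market
    return np.mean(np.abs(resid) ** p)

coef_l2 = minimize(lp_loss, x0=[0.0, 1.0], args=(2.0,)).x                        # OLS (L2 norm)
coef_l1 = minimize(lp_loss, x0=[0.0, 1.0], args=(1.0,), method="Nelder-Mead").x  # LAD (L1 norm)

print("L2 (OLS) estimate of (alpha, beta):", coef_l2)
print("L1 (LAD) estimate of (alpha, beta):", coef_l1)
```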

The realization that the distribution of returns is very long tailed, and consequently not Gaussian, has been with us now for many years. It is worth being reminded that when the British and Dutch sent ships to the East Indies in the seventeenth century looking for spices such as pepper and nutmeg, roughly 50% of the ships did not return. Of those that did, half were unsuccessful in their search for spices. However, the successful trips were enormously profitable, particularly during periods when the Black Death was operative and nutmeg was thought to be the only successful cure available. This example of non-Gaussian returns is discussed in Milton (1999).

However, if one goes to some fairly old literature by the uncertainty economists, summarized in Machina and Rothschild (1987) and Levy (1992), one finds that they firmly reject use of all Vθ measures as useful quantifications of risk, as these do not correspond to satisfactory utility functions. Levy (1992) states ‘while the variance provides some information it cannot serve as index for the risk involved for all utility functions’ (page 568). There are three specific cases where variance can serve as a risk index: if utility is quadratic, or if the return distribution is normal or log-normal. It is curious that one group holds such firm views while another group, the econometricians, appears to ignore them. The uncertainty theorists would argue that one does not need a quantitative, that is, a cardinal, measure of risk, as an ordinal measure will give you all that is needed. You do not need to say, risk of A is 10, risk of B is 8; you just want to know that A is riskier than B when making a decision. The obvious response is that you cannot perform Markowitz-style optimum portfolio selection techniques without a variance or similar measure, but many years ago Bawa (1975, 1978) showed how, at least in principle, stochastic dominance techniques can be used to rank portfolios. This approach is much less simple to use, but it becomes fairly straightforward if the estimation of Value-at-Risk is extended to the whole distribution. Conceptually, at least, this is not difficult.

A major tool designed by the uncertainty theory economists is the mean-preserving spread, in which the original distribution of returns is manipulated to form a new distribution with the same mean but an increased risk. Two movements can take place below the mean: one ‘chunk’ of probability can be moved from the mid-lower part of the distribution and placed nearer to the mean, and another chunk similarly moved from the mid-lower part but towards the tail of the distribution. If balanced correctly, the mean will not change but the extra weight in the tail will increase risk. Any risk-averse investor would prefer the original shape to the reshaped distribution after imposing the mean-preserving spread. However, it is interesting to note that the volatility measure (5), derived from the Luce results, has the property that it is unchanged by a one-sided mean-preserving spread (MPS) if θ = 1 (corresponding to mean absolute difference), is increased if θ > 1 (including variance), but is decreased if θ < 1. Thus these measures do not correspond to MPSs if θ ≤ 1. The same argument applies to the two-sided Luce result (4).
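
A small numerical check of this property, using a hypothetical discrete distribution of my own construction: a chunk of probability below the mean is split half towards the mean and half towards the tail, so the mean is preserved, and Vθ is recomputed for θ = 0.5, 1 and 2.

```python
import numpy as np

def v_theta(values, probs, theta):
    """V_theta = E|r - mean|^theta for a discrete distribution."""
    mean = np.dot(probs, values)
    return np.dot(probs, np.abs(values - mean) ** theta)

# Hypothetical discrete return distribution (values in %, probabilities sum to 1); its mean is 0.
values = np.array([-10.0, -4.0, 0.0, 4.0, 10.0])
probs  = np.array([0.05, 0.25, 0.40, 0.25, 0.05])

# One-sided mean-preserving spread below the mean: the mass at -4 is split equally
# towards the mean (-2) and towards the tail (-6); the mean stays at 0.
values_mps = np.array([-10.0, -6.0, -2.0, 0.0, 4.0, 10.0])
probs_mps  = np.array([0.05, 0.125, 0.125, 0.40, 0.25, 0.05])

for theta in (0.5, 1.0, 2.0):
    print(f"theta = {theta}: before = {v_theta(values, probs, theta):.4f}, "
          f"after MPS = {v_theta(values_mps, probs_mps, theta):.4f}")
```

The output shows Vθ decreasing for θ = 0.5, unchanged for θ = 1 and increasing for θ = 2, as claimed.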

Statisticians and econometricians behave as though volatility, particularly variance, equates with risks yet uncertainty theorists reject this identity.

There is reason to believe that actual investors, both the professionals and others, do not equate risk with volatility. These investors have different attitudes towards the upper and lower parts of the distribution. Investors will agree that there is uncertainty in the upper part of the distribution, but risk only occurs in the lower part. Portfolios are selected to reduce risk in the lower tail, but not uncertainty in the upper tail. The investor does not diversify to reduce the chance of an unexpected large positive return, only that of a large negative one. I have seen many studies of Value-at-Risk based on the lower tail but none so far on the upper tail. It is interesting that Luce, in his 1981 correction, produces a measure of risk with different weights for the two halves of the distribution. Markowitz (1959) suggested the use of a semi-variance at a very early stage. A generalization was proposed by Fishburn (1977), defined as

  • Rα(t) = ∫_{−∞}^{t} (t − r)^α dF(r),   α > 0

where F(r) is the distribution of returns and t is some target. It is related to a more recent risk measure called ‘expected shortfall’, which appears to have superior properties to Value-at-Risk (VaR) according to Yamai and Yoshiba (2001). VaRα at the 100(1 − α)% confidence level is defined as the lower 100α percentile of the return distribution. The expected shortfall is defined as R1(−VaRα)/(1 − α) + VaRα. (Standard errors for a VaR can be estimated using standard binomial or Poisson distribution theory, or using a bootstrap method.) The early arguments that such techniques were too difficult to use for portfolio construction no longer hold with modern computing. It is not clear how the centre of the distribution should be defined: as zero, the mean return, or the risk-free interest return. The mean return is probably easiest to justify but the choice is not critical.
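
To make the definitions concrete, here is a sketch with hypothetical simulated returns (the function names and the Student-t data are my own; the expected shortfall is computed directly as the average loss beyond the VaR rather than through the formula above): it evaluates Fishburn's measure for a chosen target, an empirical lower-tail VaR, and the corresponding expected shortfall.

```python
import numpy as np

def fishburn(returns, target, alpha):
    """Fishburn's below-target measure R_alpha(t) = E[(t - r)^alpha for r < t, else 0]."""
    r = np.asarray(returns, dtype=float)
    shortfall = np.clip(target - r, 0.0, None)
    return np.mean(shortfall ** alpha)

def var_and_es(returns, tail_prob=0.05):
    """Empirical lower-tail VaR (reported as a positive loss) and expected shortfall."""
    r = np.asarray(returns, dtype=float)
    q = np.quantile(r, tail_prob)          # lower tail_prob percentile of returns
    var = -q                               # VaR as a positive loss
    es = -r[r <= q].mean()                 # average loss, given a loss beyond the VaR
    return var, es

rng = np.random.default_rng(2)
returns = rng.standard_t(df=4, size=10000) * 0.01

var, es = var_and_es(returns, tail_prob=0.05)
print("5% VaR:", var, " expected shortfall:", es)
print("Fishburn R_1 (target = mean return):", fishburn(returns, returns.mean(), 1.0))
```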

Decreasing the content of the upper part of the distribution reduces uncertainty rather than risk. This may be of importance if assets are being ‘sold short’ or if various types of derivatives are involved.

Rather than just stay with the mean and variance, one could move to inclusion and consideration of higher moments, such as measures of skewness and kurtosis. This has already been considered in the literature, e.g. El Babsiri and Zakoian (2001).

The topics considered in this section have largely been viewed through the eyes of statisticians or theoretical economists rather than from the perspective of actual investors. The approaches taken are possibly too academic to be helpful; it is time to turn to more realistic topics.

2 BACK TO BASICS


There are many kinds of risk, such as inflation risk, bad-debt risk, insurance risk, exchange rate risk, corporate risk of various kinds, and so forth. The object of interest here, however, is the risk faced by an investor. A financial asset is not purchased for the purpose of consumption, like a bag of apples, nor is it used to generate aesthetic pleasure, such as a painting, but rather for the production of a flow of dividends and, hopefully, for a profit when it is eventually sold. Although the dividend stream can be important, it will not be discussed here. In this simple situation, an asset is purchased at price Pt at time t and sold at price Pt+h at time t + h, giving return rt,h = (Pt+h − Pt)/Pt. At time t both Pt+h and rt,h are uncertain quantities that will occur in the future and are thus best considered as random variables. It is this uncertainty that produces the risk, as the selling price, or the return, may be considered unsatisfactory by the investor. An unsatisfactory rate of return could be a negative one, or one below the return on some government bond, for example. However, risk is not merely the possibility of obtaining a small return; it is linked with the entire shape of the distribution of returns, especially at the lower end. It is very difficult to pin down the precise meaning of risk; one could say that risk is in the eye of the investor and its meaning clearly varies from one person to another. It is unlikely that a simple statistical measure could adequately capture such a slippery concept, and one may have to turn to other social scientists for advice.

Possibly the least controversial definition of risk is the well-known Arrow–Pratt risk premium, as discussed by Levy (1992). For an investor with utility function u(), initial wealth W, and an extra earned quantity X, which is a random variable, the risk premium π is given by solving the equation

  • E[u(W + X)] = u(W + E[X] − π)      (7)

Of course, this is hardly a practical proposition as the utility function and W are rarely known or publicly available. What is interesting is that the whole distribution of the extra income, or the return, is usually required, and some summary statistic, such as a volatility measure, is insufficient to solve (7) unless a particularly simplistic utility function is assumed.
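
As an illustration of equation (7), here is a minimal numerical sketch under assumptions of my own (a logarithmic utility and normally distributed extra income, neither taken from the paper): the premium π is found by solving E[u(W + X)] = u(W + E[X] − π) numerically.

```python
import numpy as np
from scipy.optimize import brentq

def risk_premium(u, W, X):
    """Solve E[u(W + X)] = u(W + E[X] - pi) for the Arrow-Pratt premium pi."""
    lhs = np.mean(u(W + X))
    f = lambda pi: u(W + X.mean() - pi) - lhs
    return brentq(f, -0.5 * W, 0.5 * W)   # root-find over a wide bracket

rng = np.random.default_rng(3)
W = 100.0                                 # hypothetical initial wealth
X = rng.normal(5.0, 10.0, 100000)         # hypothetical risky extra income
pi = risk_premium(np.log, W, X)           # logarithmic utility, chosen for illustration
print("Arrow-Pratt risk premium:", round(pi, 3))
```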

Where does an ordinary, beginning investor obtain data about volatility, if he or she is to use such numbers as a measure of risk? I am sure that somewhere on the Internet there is a listing of all the NYSE and NASDAQ shares with a variety of alternative volatilities attached, updated frequently. However, the less serious investor would go to the standard source for financial data, the better newspapers in each country, The Wall Street Journal, the Financial Times, or other specialized financial publications. Although these newspapers carry many prices, they give no indication of volatility except possibly the daily price range. On the other hand, they all have a column marked ‘vol’, but this is for daily (or weekly) volume rather than for volatility. Despite its ready availability, volume has until recently been the missing variable in finance, compared with its role in micro theory, for example. Oscar Morgenstern and I, in our 1970 book, did some analysis, but papers since then have been fairly scarce compared to papers using prices or returns. It is unclear why; volume is readily available and could be useful for forecasting volatility.

A recent visitor at the University of California, San Diego, Dr Minxian Yang from the University of New South Wales, ran some evaluations for a couple of London stocks using daily data. He found, of course, that returns were very non-normal, with an excess kurtosis in one case of 11.4; returns divided by the square root of volume were somewhat nearer normal, with an excess kurtosis of 5.2; and returns divided by the square root of the number of trades were closer still, with an excess kurtosis of 2.7, but still not normal. However, log volume had a skewness of 0.13 and an excess kurtosis of −0.03, and according to these statistics could be well approximated by a normal distribution. It was also found that the correlation between the absolute return and log volume was 0.30, and between the absolute return and the square root of trades was 0.44. These values are high enough to suggest that volume, or perhaps daily trades, would be interesting explanatory variables to include in a model for volatility, particularly as both of these variables appear to be somewhat forecastable.
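
The following sketch shows the kind of calculation being described, but on hypothetical simulated data rather than the London stocks (the mixture-of-distributions setup, in which volume drives the return variance, is my own assumption for illustration): it reports the excess kurtosis of raw and volume-scaled returns and the correlation of absolute returns with log volume.

```python
import numpy as np
from scipy.stats import kurtosis

rng = np.random.default_rng(4)
n = 2500                                        # roughly ten years of daily observations
# Hypothetical setup: volume is lognormal and drives the return variance.
log_volume = rng.normal(10.0, 0.5, n)
volume = np.exp(log_volume)
returns = rng.normal(0.0, 1.0, n) * np.sqrt(volume) * 1e-4

print("excess kurtosis of returns:             ", kurtosis(returns))                     # Fisher definition
print("excess kurtosis of returns/sqrt(volume):", kurtosis(returns / np.sqrt(volume)))   # close to normal here
print("corr(|return|, log volume):             ", np.corrcoef(np.abs(returns), log_volume)[0, 1])
```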

Daily volume, or log volume, is highly autocorrelated and so is forecastable from its own past. It has been found that measures of volatility, such as GARCH and absolute returns, are also forecastable, in part due to the volatility clustering often noted in plots of these quantities. The question of whether or not volatility is forecastable is, I believe, settled, but this leads to at least two other important questions: why is it forecastable, and which forecasting method works best? There are markets for some forms of volatility, although they seem to be not well developed and are clearly not efficient, or volatility would not be so forecastable. Possibly volatility is not considered important enough, or perhaps substantial use is made of an efficient futures market?

Turning to the question of which method works best, this has recently been considered by Poon and Granger (‘Forecasting financial market volatility: a review’, unpublished manuscript, 2001), who surveyed over 80 empirical papers in this area, but eventually concentrated on about 40 papers that compared alternative methods in terms of their forecasting ability. A problem with such a meta-analysis is that there is considerable variation in the approaches taken. Only subsets of the possible techniques are used; different types of speculative prices are considered (largely stock prices but some exchange rates and some commodity prices); different horizons are involved, from days to months; and different measures of forecast quality are used, including mean squared error (MSE), mean absolute deviation from the mean, and many others. Although available, significance tests are rarely applied to these measures; statements are merely made that one method produces a lower value for MSE, say, than another. Another common problem is that it is unclear exactly what is being forecast. As stated earlier, volatility is not directly observed or publicly recorded, so a quantity such as the past monthly variance, based on daily prices, can be used. Such statistics are easily calculated and become appropriate objectives for a forecasting competition. They can also be used as a time series to construct simple forecasting models, such as an autoregression of low order or an exponentially weighted average. In this study the types of forecasts considered were labelled: GARCH (a class that contained many alternative formulations); HISTORICAL (again taking several forms, based on direct observations of the variance of the returns series); IMPLIED (based on the Black–Scholes option implied volatility theory); and STOCHASTIC VOLATILITY (or SV, based on this alternative class of models). For the papers making direct comparisons one finds:

  • (1)
    Five papers find that GARCH beats HISTORICAL;
  • (2)
    Five papers find that HISTORICAL beats GARCH;
  • (3)
    Only three papers consider SV forecasts; one finds SV better than GARCH, one finds GARCH better than SV, and a third paper finds SV better than GARCH for stocks but the reverse for currencies.
  • (4)
    Thirteen papers compare IMPLIED with HISTORICAL, with twelve preferring IMPLIED.
  • (5)
    Fourteen papers compare IMPLIED with GARCH; all but one find that IMPLIED provides better forecasts. One of the papers also finds that IMPLIED performs better than SV.
  • (6)
    Only a few papers consider combinations of forecasts and generally find that the combination outperforms its components.

Overall, IMPLIED seems to be the superior technique with GARCH and HISTORICAL roughly equal second. The result is not really surprising as the IMPLIED forecasts are based on a wider information set than the alternatives, not just depending on past returns but also using option prices. On the other hand, suitable options may not always be available and so these forecasts cannot be used on many occasions.
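
As a concrete illustration of the HISTORICAL class mentioned above, and of using a realized-variance series to build a simple forecasting model, here is a sketch on hypothetical simulated daily returns (the AR(1) fitted by least squares is my choice of ‘low-order autoregression’; none of the numbers come from the surveyed papers).

```python
import numpy as np

rng = np.random.default_rng(5)
n_months, days_per_month = 120, 21
# Hypothetical daily returns with month-to-month volatility clustering.
log_var = np.zeros(n_months)
for t in range(1, n_months):
    log_var[t] = 0.9 * log_var[t - 1] + rng.normal(0.0, 0.3)
daily = [rng.normal(0.0, 0.01 * np.exp(0.5 * lv), days_per_month) for lv in log_var]

# HISTORICAL-style volatility proxy: realized variance of each month's daily returns.
realized_var = np.array([np.var(month) for month in daily])

# One-step forecast from an AR(1) fitted by least squares: v_t = a + b * v_{t-1} + error.
y, x = realized_var[1:], realized_var[:-1]
b, a = np.polyfit(x, y, 1)                    # slope first, then intercept
forecast_next = a + b * realized_var[-1]
print("AR(1) coefficients (a, b):", a, b)
print("forecast of next month's realized variance:", forecast_next)
```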

The quantitative results from this meta-analysis should be treated with care as the quality of the studies is not considered. There are various possible biases in the meta-analysis as many of the comparative studies are written by researchers who hope for a certain outcome, and may not have submitted an empirical paper with the ‘wrong’ conclusion. The papers that get into print are not necessarily a random sample of research projects that are started. Greater use of statistical tests comparing criteria and the use of combinations could make such studies more interesting.

A weakness of these empirical studies is that they compare techniques for individual assets; but ever since Markowitz (1959) considered linear portfolios, and since the development of the capital asset pricing model (CAPM) by Sharpe (1964) and Lintner (1965), it has been realized that risks are connected across assets and that this is important. The next section turns to aspects of this topic.

3 SPREADING THE RISK


In the basic mean–variance theory using linear portfolios, the benefits of diversification are well understood. In a particularly simple case, for purposes of illustration, one can start with a group of assets all with the same mean return, m say, and perhaps also the same variance; a simple portfolio consisting of several of these assets will then have the same mean m but a reduced variance. If one equates risk with variance, this provides a portfolio that is more desirable than any of the individual assets, in this simple example. The benefits of diversification are obvious from this type of example, but are there corresponding costs? If we concentrate on a symmetric distribution, then a decline in dispersion will result in less weight in both tails, thus both less risk and also fewer large positive returns, which has previously been criticized as a problem. This difficulty arises however dispersion is measured, provided the measure involves both tails equally, and so it is not a criticism of variance in particular; and the assumption of symmetry is not required, provided that both tails have some content.

To provide a simple numerical example: starting with n assets, each with zero mean return, variance 1, and covariance c between any pair of returns, an equally weighted portfolio has mean zero and variance c + (1 − c)/n. If n is fairly large this variance will be dominated by the value of c. For example, if c = 0.36 and n = 20, the 99% interval is −2.6 to +2.6 for an individual share but only −1.63 to +1.63 for the portfolio. The portfolio reduces the probability of a large loss but also of a large gain.
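
A quick check of these numbers (assuming, as the intervals implicitly do, normally distributed returns; the rounded critical value 2.6 follows the text):

```python
import numpy as np

n, c = 20, 0.36
portfolio_var = c + (1 - c) / n          # variance of the equally weighted portfolio
z99 = 2.6                                # approximate two-sided 99% normal critical value, as in the text

print("portfolio variance:", portfolio_var)                              # 0.392
print("99% interval, single share: +/-", round(z99 * 1.0, 2))            # +/- 2.6
print("99% interval, portfolio:    +/-", round(z99 * np.sqrt(portfolio_var), 2))  # +/- 1.63
```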

A vital step in the understanding of risk was the introduction of the concepts of diversifiable and non-diversifiable risk, the latter being the risk remaining after some kind of portfolio has been formed. Within the linear framework these concepts are easily described by using the capital asset pricing model which takes the form

  • rj,t = aj + bj mt + ej,t

where rj,t is the return on the jth asset, mt is the market return, aj and bj are parameters, and ej,t is a residual. Here bj measures the extent to which the (variance) risk cannot be diversified away. If one moves to a non-Gaussian situation, as is appropriate here, it is no longer clear that correlation is the correct measure of dependence, and so probably non-linear forms of the CAPM should be considered. An obvious generalization is to use a piecewise linear formulation, which will approximate most smooth non-linear relationships. At the very least, different coefficients could be used for market returns above and below the mean, thus partially capturing the difference between the upper and lower tails of the distribution.
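
A sketch of that piecewise linear idea, on hypothetical simulated data (the split at the market mean and the two-slope least-squares regression are one way to implement the suggestion; the series and the coefficients are invented for illustration):

```python
import numpy as np

rng = np.random.default_rng(6)
n = 2000
market = rng.standard_t(df=5, size=n) * 0.01
# Hypothetical asset with a larger "down-market" beta than "up-market" beta.
beta_up, beta_down = 0.9, 1.4
asset = np.where(market >= market.mean(), beta_up, beta_down) * market \
        + rng.normal(0.0, 0.005, n)

# Piecewise linear CAPM: r_j = a + b_up * max(m - mbar, 0) + b_down * min(m - mbar, 0) + e
mbar = market.mean()
X = np.column_stack([np.ones(n),
                     np.clip(market - mbar, 0.0, None),    # up-market deviation
                     np.clip(market - mbar, None, 0.0)])   # down-market deviation
coef, *_ = np.linalg.lstsq(X, asset, rcond=None)
print("intercept, up-market beta, down-market beta:", np.round(coef, 3))
```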

At various stages of the arguments used in the previous sections there is an unstated assumption about coefficients or shapes remaining essentially unchanged through time. An estimated mean may be taken to be a good forecast of future values, and similarly with an estimated variance, the coefficients in an estimated ARCH model, or a risk premium, however defined. More particularly, if a distribution is selected for returns, such as the normal, t or generalized hyperbolic, for example, it is assumed that the same distribution will continue in the future, although with changing parameters. If estimated parameters and assumed distributions are taken to be persistent, it is no big leap to assume that an estimated distribution will persist. For example, if the historical record of returns for a portfolio has a histogram that is clearly non-symmetric, it is presumably acceptable to believe that future returns will be drawn from a non-symmetric distribution. It follows that if one can find individual assets or small portfolios with particularly interestingly shaped distributions, then there is a positive probability that these shapes will be persistent and can be used to construct portfolios having distributions with desirable shapes. This possibility is discussed further below.

Almost the reverse of persistence is the familiar idea of ‘regression to the mean’. If some object, such as an asset, appears to be different from the majority, then as time passes it will lose its individuality and revert to having similar properties to the majority. The distribution of returns over all assets will be the mixture of the distributions of the individual assets, but these distributions need not all be the same shape, possibly differing only in mean and volatility. It is vital that we get away from thinking about mixtures of normal distributions, or even of bell-shaped (BS) distributions, in the area of finance, as these assumptions are simplistic and give distributions with the wrong tail properties. It is important to discover whether empirical distributions show evidence of being persistent or of reverting to the mean.

When moving from the distribution of returns of a single asset to those from a (linear) portfolio, the central limit theorem cannot be called upon to produce either the normal or a BS distribution, as the contributing assets are not independent, and their relationships may not be fully explained by the use of just correlations, as indicated by the recent interest in copulas (see Embrechts, McNeil and Straumann, 1994).

If one was confident about the persistence of distributional shapes over time, it may be possible to ‘sculpt’ distributions from one shape to a preferred one by the use of shifts like mean-preserving spreads. These are best done on one side of the mean return. In theory, the mean return would be unchanged but the down-side risk reduced by correct selection of assets whose return distributions have distinctive properties, such as sharp skewness. A more straightforward way to form such mean-preserving spreads is by using put and call options, but these will have a direct cost and thus will not be mean preserving unless one can find investors who are risk-seeking or risk-neutral rather than risk-averse. Given the huge size of the gambling industry in the USA and many other countries, consisting almost entirely of unfair games, it would seem to be a matter of packaging to encourage gamblers to take risky positions on the stock market.

The usual discussion of risk, as above, is about a single asset or a particular portfolio. However, it is very unclear if the risk level for a fund's portfolio can be considered without considering the contents of the portfolios of other major funds. The following quotation states the problem. The eminent economist Irving Fisher wrote: ‘were it true that each individual speculator made up his own mind independently of every other as to the future course of events, the errors of some would probably be offset by those of others. But the mistakes of the common herd are usually in the same direction. Like sheep, they follow a single leader…A chief cause of crises, runs on banks, etc., is that risks are not independently reckoned, but are a matter of imitation’ (Fisher, 1924, quoted in Mallios, 2000). If a VaR or some other risk analysis suggests that a portfolio has too high a risk level and that certain classes of assets should be unloaded, the value of these assets will depend on whether or not other funds' portfolios are attempting to unload the same assets at the same time. The famous hedge fund Long-Term Capital Management, whose final losses were around $4 billion over a five-week period, suffered from exactly this linked-risk problem. For a discussion of the collapse of LTCM, see Lowenstein (2001).

A recent paper by Silvapulle and Granger (2001) explores such risk relationships in a simple form. Our empirical study uses daily data on the thirty Dow Jones Industrial stocks for the period 1991–9. Correlations between the returns of these stocks were estimated conditional on whether both returns lay in an outer quantile region (say, the upper or lower 25%) or in a central region (the middle 50%). Generally, it was found that the average correlation between returns was roughly the same in the central region (at about 0.15) as in the upper region (the bull market), but the average correlation was much higher in the lower region (at about 0.35) (the bear market). It follows that in the periods when portfolio diversification is most needed it will be less successful than at other times, because of the links across assets.
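
A sketch of this kind of conditional-correlation calculation, on hypothetical simulated return series rather than the Dow stocks (the quantile cut-offs mirror the description above; the data-generating process is invented and will not reproduce the empirical asymmetry):

```python
import numpy as np

def conditional_corr(x, y, lo, hi):
    """Correlation using only the days on which both series lie between their lo and hi quantiles."""
    in_region = ((x >= np.quantile(x, lo)) & (x <= np.quantile(x, hi)) &
                 (y >= np.quantile(y, lo)) & (y <= np.quantile(y, hi)))
    return np.corrcoef(x[in_region], y[in_region])[0, 1]

rng = np.random.default_rng(7)
n = 2000
common = rng.standard_t(df=4, size=n)                # common heavy-tailed factor
x = 0.4 * common + rng.standard_t(df=4, size=n)      # two hypothetical correlated return series
y = 0.4 * common + rng.standard_t(df=4, size=n)

print("lower 25% region:", conditional_corr(x, y, 0.00, 0.25))
print("central 50%     :", conditional_corr(x, y, 0.25, 0.75))
print("upper 25% region:", conditional_corr(x, y, 0.75, 1.00))
```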

It is generally agreed that one cannot strictly discuss the riskiness of a single asset, but that it should be correctly discussed within the context of a portfolio. The same logic applies to portfolios: one cannot discuss the risk of a portfolio without being concerned with the co-risk with other portfolios. This is particularly a problem when many major portfolios in a market carry the same ‘high-risk’ assets, which may have to be liquidated at the same time. One may need to appoint a regulator for the market to adjudicate the overall risk level of the market and suggest diversification before problems arise.

4 CONCLUSION


Wall Street is a one-way street where there is a highly active market in second-hand paper, being share certificates that virtually no owners ever see. The general investor is somewhat concerned with risk but largely concentrates on returns; the professional investors, those who run market funds, for example, are concerned with both risk and return. Because of the compelling empirical and theoretical results underlying the efficient market theory, academics have largely ignored the question of forecastability of returns in recent decades, and have concentrated on exploring the question of risk. It is possible that this lack of balance has gone too far.

As an illustration one could look at daily returns for the Dow and the NASDAQ indices. Using just visual econometrics, the Dow figures over a twelve-month period ending February 2001 show no obvious drift, and any movement in the mean is hidden by volatility. The NASDAQ index from mid-1998 to the end of 2000 has relatively little volatility but increases steadily in level until March 2000 and declines steadily thereafter. For the NASDAQ plots there have been long periods for which the first-moment movements, measuring the centre of the distribution, dominate the volatility.

It is useful to remind readers that the efficient market theory says that there cannot exist a trading rule that consistently makes positive profits but a successful rule can exist for some situations at some times for some periods. If I find out how to find such rules, I doubt if I will publish the procedure. Successful traders, many of whom use rules, are also disinclined to discuss the source of their success.

Acknowledgements


I would like to thank the participants in the Conference on Volatility, Perth, Australia, September 2001, particularly Rob Engle, for their helpful comments.

REFERENCES

  • Bawa VS. 1975. Optimal rules for ordering uncertain prospects. Journal of Financial Economics 2: 95–121.
  • Bawa VS. 1978. Safety first, stochastic dominance, and optimal portfolio choice. Journal of Financial and Quantitative Analysis 13: 255–271.
  • Ding Z, Granger CWJ, Engle RF. 1993. A long memory property of stock market returns and a new model. Journal of Empirical Finance 1: 83–106.
  • El Babsiri M, Zakoian J-M. 2001. Contemporaneous asymmetry in GARCH processes. Journal of Econometrics 101: 257–294.
  • Embrechts P, McNeil A, Straumann D. 1994. Correlation and dependency in risk management: properties and pitfalls. Working paper, Department of Mathematics, University of Zurich, Switzerland.
  • Fishburn PC. 1977. Mean–risk analysis with risk associated with below-target return. American Economic Review 67: 116–126.
  • Fisher I. 1924. Useful and harmful speculation. In Readings in Risk and Risk Taking, Hardy C (ed.). University of Chicago Press: Chicago.
  • Granger CWJ, Ding Z, Spear S. 2000. Stylized facts on the temporal and distributional properties of absolute returns: an update. In Statistics and Finance: An Interface. Proceedings of the Hong Kong International Workshop on Statistics in Finance, Chan W-S, et al. (eds). Imperial College Press: London.
  • Granger CWJ, Morgenstern O. 1970. The Predictability of Stock Prices. Heath Lexington: Lexington, MA.
  • Harter HL. 1977. Nonuniqueness of least absolute values regression. Communications in Statistics—Theory and Methods A6: 829–838.
  • Levy H. 1992. Stochastic dominance and expected utility: survey and analysis. Management Science 38: 555–593.
  • Lintner J. 1965. Security prices, risk and maximal gains from diversification. Journal of Finance 20: 587–615.
  • Lowenstein R. 2001. When Genius Failed: The Rise and Fall of Long-Term Capital Management. Random House: New York.
  • Luce RD. 1980. Some possible measures of risk. Theory and Decision 12: 217–228.
  • Luce RD. 1981. Correction to ‘Some possible measures of risk’. Theory and Decision 13: 381.
  • Machina M, Rothschild M. 1987. Risk. In The New Palgrave Dictionary of Economics, Eatwell J, Milgate M, Newman P (eds). Macmillan: London; 203–205.
  • Mallios WS. 2000. Analysis of Sports Forecasting. Kluwer Publishers: New York.
  • Markowitz H. 1959. Portfolio Selection. John Wiley: New York.
  • Milton G. 1999. Nathaniel's Nutmeg. Hodder and Stoughton: London.
  • Money AH, et al. 1982. The linear regression model: Lp-norm estimation and the choice of p. Communications in Statistics—Simulations and Computations 11: 89–109.
  • Nyquist H. 1983. The optimal Lp norm estimator in linear regression models. Communications in Statistics—Theory and Methods 12: 2511–2524.
  • Sharpe W. 1964. Capital asset prices: a theory of market equilibrium under conditions of risk. Journal of Finance 19: 425–442.
  • Silvapulle P, Granger CWJ. 2001. Large returns, conditional correlations, and portfolio diversification. Quantitative Finance (forthcoming).
  • Yamai Y, Yoshiba T. 2001. On the validity of Value-at-Risk: comparative analysis with expected shortfall. Institute for Monetary and Economic Studies, Bank of Japan, Discussion Paper 54.