This study proposes a modified strike-spread method for hedging barrier options in generalized autoregressive conditional heteroskedasticity (GARCH) models with transaction costs. A simulation study was conducted to compare the hedging performance of the proposed method with that of several well-known static methods for hedging barrier options. An accurate, easy-to-implement and fast scheme for generating the first passage time under the GARCH framework is also proposed; it enhances the accuracy and efficiency of the simulation. Simulation results and an empirical study using real data indicate that the proposed approach shows promising performance for hedging barrier options in GARCH models when transaction costs are taken into account.
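The first-passage simulation underlying such a study can be sketched as follows. This is a minimal illustration, not the paper's scheme: it assumes a GARCH(1,1) specification with illustrative parameter values and records the first time each simulated path crosses a lower barrier.

```python
import numpy as np

def garch_barrier_paths(s0=100.0, barrier=90.0, n_steps=252, n_paths=10000,
                        mu=0.0, omega=1e-6, alpha=0.08, beta=0.9, seed=0):
    """Simulate GARCH(1,1) log-return paths and record the first step at
    which each path crosses a lower barrier (down-and-out style).
    Returns one first-passage step per path, or -1 if never crossed."""
    rng = np.random.default_rng(seed)
    # start each path at the stationary variance of the GARCH(1,1) process
    h = np.full(n_paths, omega / (1.0 - alpha - beta))
    log_s = np.full(n_paths, np.log(s0))
    first_passage = np.full(n_paths, -1)
    log_b = np.log(barrier)
    for t in range(n_steps):
        z = rng.standard_normal(n_paths)
        r = mu + np.sqrt(h) * z                 # one-period log return
        log_s += r
        crossed = (first_passage < 0) & (log_s <= log_b)
        first_passage[crossed] = t + 1
        h = omega + alpha * r**2 + beta * h     # GARCH variance recursion
    return first_passage

fp = garch_barrier_paths()
knock_prob = (fp >= 0).mean()                   # Monte Carlo knock-out estimate
```

A scheme such as the one the paper proposes would refine this naive per-step crossing check, which otherwise underestimates crossings between monitoring dates.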

Researchers familiar with spatial models are aware of the challenge of choosing the level of spatial aggregation. Few studies have been published on the investigation of temporal aggregation and its impact on inferences regarding disease outcome in space–time analyses. We perform a case study for modelling individual disease outcomes using several Bayesian hierarchical spatio-temporal models, while taking into account the possible impact of spatial and temporal aggregation. Using longitudinal breast cancer data from South East Queensland, Australia, we consider both parametric and non-parametric formulations for temporal effects at various levels of aggregation. Two temporal smoothness priors are considered separately; each is modelled with fixed effects for the covariates and an intrinsic conditional autoregressive prior for the spatial random effects. Our case study reveals that different model formulations produce considerably different model performances. For this particular dataset, a classical parametric formulation that assumes a linear time trend produces the best fit among the five models considered. Different aggregation levels of temporal random effects were found to have little impact on model goodness-of-fit and estimation of fixed effects.
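The intrinsic conditional autoregressive (ICAR) prior used for the spatial random effects has a standard precision structure that can be written down directly; the toy adjacency map below is illustrative, not the Queensland data.

```python
import numpy as np

def icar_precision(adjacency):
    """Precision matrix of the intrinsic CAR prior for areal random
    effects: Q = D - W, where W is the symmetric 0/1 neighbourhood
    matrix and D is diagonal with each area's neighbour count.
    Q is singular (row sums are zero), reflecting the improper prior."""
    W = np.asarray(adjacency, dtype=float)
    D = np.diag(W.sum(axis=1))
    return D - W

# Toy map: four areas in a line, 0-1-2-3.
W = np.array([[0, 1, 0, 0],
              [1, 0, 1, 0],
              [0, 1, 0, 1],
              [0, 0, 1, 0]])
Q = icar_precision(W)
```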

Effective implementation of likelihood inference in models for high-dimensional data often requires a simplified treatment of nuisance parameters, which are replaced by convenient estimates. In addition, the likelihood function may be simplified by means of a partial specification of the model, as when composite likelihood is used. In such circumstances, tests and confidence regions for the parameter of interest may be constructed using Wald-type and score-type statistics, defined so as to account for nuisance parameter estimation or partial specification of the likelihood. In this paper a general analytical expression for the required asymptotic covariance matrices is derived, and suggestions for obtaining Monte Carlo approximations are presented. The same matrices are involved in a rescaling adjustment that we propose for the log likelihood ratio type statistic. This adjustment restores the usual chi-squared asymptotic distribution, which is generally invalid after the simplifications considered. The practical implication is that, for a wide variety of likelihoods and nuisance parameter estimates, confidence regions for the parameters of interest are readily computable from the rescaled log likelihood ratio type statistic as well as from the Wald-type and score-type statistics. Two examples, a measurement error model with full likelihood and a spatial correlation model with pairwise likelihood, illustrate and compare the procedures. Wald-type and score-type statistics may give rise to confidence regions with unsatisfactory shape in small and moderate samples. In addition to having satisfactory shape, regions based on the rescaled log likelihood ratio type statistic show empirical coverage in reasonable agreement with nominal confidence levels.
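The covariance matrices involved here typically take the standard sandwich form, which the following sketch illustrates. The names `H` (sensitivity matrix) and `J` (variability matrix) are conventional labels assumed here, not the paper's exact expressions.

```python
import numpy as np

def sandwich_covariance(H, J):
    """Asymptotic covariance of an estimator whose estimating equation
    has sensitivity matrix H (expected negative derivative of the score)
    and variability matrix J (covariance of the score): H^{-1} J H^{-T}.
    Under a correctly specified full likelihood, H = J and this reduces
    to the usual inverse Fisher information."""
    Hinv = np.linalg.inv(H)
    return Hinv @ J @ Hinv.T

def wald_statistic(theta_hat, theta0, H, J):
    """Wald-type statistic for H0: theta = theta0, using the sandwich
    covariance to account for partial specification of the likelihood."""
    V = sandwich_covariance(H, J)
    d = np.atleast_1d(theta_hat - theta0)
    return float(d @ np.linalg.solve(V, d))
```

When `H != J`, the naive log likelihood ratio statistic loses its chi-squared limit, which is the situation the proposed rescaling addresses.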

The odds ratio (OR) is a measure of association used for analysing an *I* × *J* contingency table. The total number of ORs to check grows with *I* and *J*, and several statistical methods have been developed for summarising them. These methods begin from two different starting points: the *I* × *J* contingency table itself and the two-way table composed of the ORs. In this paper we focus on the relationship between these methods and point out that, for an exhaustive analysis of association through log ORs, it is necessary to consider all the outcomes of these methods. We also introduce some new methodological and graphical features. To illustrate previously used methodologies, we consider a cross-classification of the eye and hair colour of 5387 children from Scotland. We show how, through the log OR analysis, it is possible to extract useful information about the association between the variables.
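The two-way table of ORs built from an *I* × *J* contingency table is usually the (I−1) × (J−1) table of local log odds ratios from adjacent 2 × 2 subtables; a sketch with an illustrative toy table (not the Scottish data) is below.

```python
import numpy as np

def local_log_odds_ratios(table):
    """(I-1) x (J-1) table of local log odds ratios for an I x J
    contingency table of counts n_ij:
    log OR_ij = log( n_ij * n_{i+1,j+1} / (n_{i,j+1} * n_{i+1,j}) )."""
    n = np.asarray(table, dtype=float)
    return (np.log(n[:-1, :-1]) + np.log(n[1:, 1:])
            - np.log(n[:-1, 1:]) - np.log(n[1:, :-1]))

# Illustrative 2 x 2 table (not the eye/hair data from the paper).
t = np.array([[10.0, 20.0],
              [30.0, 40.0]])
lor = local_log_odds_ratios(t)   # single entry: log((10*40)/(20*30))
```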

The traditional and readily available multivariate analysis of variance (MANOVA) tests such as Wilks' Lambda and the Pillai–Bartlett trace start to suffer from low power as the number of variables approaches the sample size. Moreover, when the number of variables exceeds the number of available observations, these statistics are not available for use. Ridge regularisation of the covariance matrix has been proposed to allow the use of MANOVA in high-dimensional situations and to increase its power when the sample size approaches the number of variables. In this paper two forms of ridge regression are compared to each other and to a novel approach based on lasso regularisation, as well as to more traditional approaches based on principal components and the Moore–Penrose generalised inverse. The performance of the different methods is explored via an extensive simulation study. All the regularised methods perform well; the best method varies across the different scenarios, with margins of victory being relatively modest. We examine a data set of soil compaction profiles at various positions relative to a ridgetop, and illustrate how our results can be used to inform the selection of a regularisation method.
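The basic ridge idea can be sketched as follows: adding a ridge term to the error (within-group) sums-of-squares matrix keeps Wilks' Lambda defined even when the number of variables exceeds the number of observations. This is a generic illustration under an assumed simple one-way layout, not the specific forms compared in the paper.

```python
import numpy as np

def ridge_wilks_lambda(X, groups, lam=1.0):
    """Wilks' Lambda = det(E_r) / det(E_r + H) with a ridge-regularised
    error matrix E_r = E + lam * I, where E and H are the within- and
    between-group SSCP matrices of a one-way MANOVA layout.
    X: n x p data matrix; groups: length-n group labels."""
    X = np.asarray(X, dtype=float)
    groups = np.asarray(groups)
    grand = X.mean(axis=0)
    p = X.shape[1]
    E = np.zeros((p, p))
    H = np.zeros((p, p))
    for g in np.unique(groups):
        Xg = X[groups == g]
        mg = Xg.mean(axis=0)
        C = Xg - mg
        E += C.T @ C                              # within-group SSCP
        d = (mg - grand)[:, None]
        H += Xg.shape[0] * (d @ d.T)              # between-group SSCP
    Er = E + lam * np.eye(p)
    # log-determinants for numerical stability in higher dimensions
    _, logdet_E = np.linalg.slogdet(Er)
    _, logdet_T = np.linalg.slogdet(Er + H)
    return float(np.exp(logdet_E - logdet_T))
```

Small values of the statistic indicate strong group separation, as in the unregularised case.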

Sparse principal components analysis (SPCA) is a technique for finding principal components with a small number of non-zero loadings. Our contribution to this methodology is twofold. First, we derive the sparse solutions that minimise the least squares criterion subject to sparsity requirements. Second, recognising that sparsity is not the only requirement for achieving simplicity, we suggest a backward elimination algorithm that computes sparse solutions with large loadings. This algorithm can be run without specifying the number of non-zero loadings in advance, and it is also possible to require that a minimum amount of variance be explained by the components. We give thorough comparisons with existing SPCA methods and present several examples using real datasets.
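A backward elimination scheme of this flavour can be sketched for a single component as follows. This is a simplified assumption-laden illustration, not the paper's algorithm: it drops one variable at a time from the leading eigenvector and stops when the retained variance falls below a user-set fraction of the full first component's variance.

```python
import numpy as np

def backward_sparse_pc(X, min_var_ratio=0.8):
    """Backward-elimination sketch for one sparse principal component:
    repeatedly drop the variable with the smallest absolute loading of
    the leading eigenvector and refit on the remaining variables, while
    the explained variance stays above min_var_ratio of the full first
    component's variance. Returns a length-p loading vector with zeros
    for eliminated variables."""
    X = np.asarray(X, dtype=float)
    X = X - X.mean(axis=0)
    S = X.T @ X / (len(X) - 1)          # sample covariance matrix
    p = S.shape[0]
    active = list(range(p))

    def leading(Ssub):                  # largest eigenvalue and eigenvector
        vals, vecs = np.linalg.eigh(Ssub)
        return vals[-1], vecs[:, -1]

    full_var, vec = leading(S)
    while len(active) > 1:
        drop = int(np.argmin(np.abs(vec)))
        trial = active[:drop] + active[drop + 1:]
        t_var, t_vec = leading(S[np.ix_(trial, trial)])
        if t_var < min_var_ratio * full_var:
            break                       # variance requirement would fail
        active, vec = trial, t_vec
    loadings = np.zeros(p)
    loadings[active] = vec
    return loadings
```

Note that the number of non-zero loadings is determined by the variance requirement rather than fixed in advance, mirroring the property highlighted in the abstract.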