Keywords:

  • finite mixtures;
  • mixed model;
  • non-Gaussian leading term;
  • saddlepoint approximation;
  • stochastic networks;
  • time series

Abstract.  For certain classes of hierarchical models, it is easy to derive an expression for the joint moment-generating function (MGF) of the data, whereas the joint probability density has an intractable form that typically involves an integral. The most important example is the class of linear models with non-Gaussian latent variables. Parameters in the model can be estimated by approximate maximum likelihood, using a saddlepoint-type approximation to invert the MGF. We focus on modelling heavy-tailed latent variables, and suggest a family of mixture distributions that behaves well under the saddlepoint approximation (SPA). It is shown that the well-known normalization issue renders the ordinary SPA useless in the present context. As a solution, we extend the non-Gaussian leading-term SPA to a multivariate setting, and introduce a general rule for choosing the leading-term density. The approach is applied to mixed-effects regression, time-series models, and stochastic networks, and it is shown that the modified SPA is very accurate.
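To make the MGF-inversion idea concrete, the following is a minimal sketch of the classical first-order (Gaussian leading-term) SPA in one dimension, not the modified multivariate version developed in the paper. Given the cumulant-generating function K(s) = log MGF(s) and its first two derivatives, the density at x is approximated by exp(K(ŝ) - ŝx) / sqrt(2π K''(ŝ)), where the saddlepoint ŝ solves K'(ŝ) = x. The function names and the bisection root-finder are illustrative choices, and the gamma distribution is used only because its CGF and exact density are available in closed form for comparison:

```python
import math

def spa_density(x, K, K1, K2, s_lo, s_hi, tol=1e-12):
    """First-order saddlepoint approximation to the density at x,
    given the cumulant-generating function K and its first two
    derivatives K1, K2.  The saddlepoint s_hat solves K1(s_hat) = x;
    since K1 is increasing, it is found by bisection on (s_lo, s_hi)."""
    lo, hi = s_lo, s_hi
    while hi - lo > tol:
        mid = 0.5 * (lo + hi)
        if K1(mid) < x:
            lo = mid
        else:
            hi = mid
    s_hat = 0.5 * (lo + hi)
    return math.exp(K(s_hat) - s_hat * x) / math.sqrt(2 * math.pi * K2(s_hat))

# Illustrative check with a Gamma(shape=a, scale=b) distribution,
# whose CGF is K(s) = -a*log(1 - b*s) for s < 1/b.
a, b = 3.0, 2.0
K  = lambda s: -a * math.log(1.0 - b * s)
K1 = lambda s: a * b / (1.0 - b * s)
K2 = lambda s: a * b * b / (1.0 - b * s) ** 2

x = 5.0
approx = spa_density(x, K, K1, K2, s_lo=-50.0, s_hi=1.0 / b - 1e-9)
exact = x ** (a - 1) * math.exp(-x / b) / (math.gamma(a) * b ** a)
rel_err = abs(approx / exact - 1.0)
```

For the gamma family the unnormalized SPA is known to reproduce the exact density up to Stirling's approximation of Γ(a), so the relative error here is a few percent; renormalizing the approximation over x would remove most of it. The normalization issue discussed in the abstract refers to this failure of the raw SPA to integrate to one, which is what motivates the leading-term modification.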