Time index-ordered random variables are said to be antedependent (AD) of order (p1, p2, …, pn) if the kth variable, conditioned on the pk immediately preceding variables, is independent of all further preceding variables. Inferential methods associated with AD models are well developed for continuous (primarily normal) longitudinal data, but not for categorical longitudinal data. In this article, we develop likelihood-based inferential procedures for unstructured AD models for categorical longitudinal data. Specifically, we derive maximum likelihood estimators (MLEs) of model parameters; penalized likelihood criteria and likelihood ratio tests for determining the order of antedependence; and likelihood ratio tests for homogeneity across groups, time invariance of transition probabilities, and strict stationarity. We give closed-form expressions for MLEs and test statistics, which allow for the possibility of empty cells and monotone missing data, for all cases save strict stationarity. For data with an arbitrary missingness pattern, we derive an efficient restricted expectation–maximization algorithm for obtaining MLEs. We evaluate the performance of the tests by simulation. We apply the methods to longitudinal studies of toenail infection severity (measured on a binary scale) and Alzheimer's disease severity (measured on an ordinal scale). The analysis of the toenail infection severity data reveals interesting nonstationary behavior of the transition probabilities and indicates that an unstructured first-order AD model is superior to stationary and other structured first-order AD models that have previously been fit to these data. The analysis of the Alzheimer's severity data indicates that the antedependence is second order with time-invariant transition probabilities, suggesting the use of a second-order autoregressive cumulative logit model. Copyright © 2013 John Wiley & Sons, Ltd.
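For an unstructured first-order AD model with categorical responses, the closed-form MLE of each transition probability is simply a conditional relative frequency, computed separately at each time point (so no stationarity is imposed). The sketch below illustrates this for complete binary data; it is a minimal illustration under assumed toy data, not the authors' implementation, and the function name `transition_mles` is hypothetical.

```python
# Minimal sketch (not the authors' implementation): closed-form MLEs of
# time-varying first-order transition probabilities for binary
# longitudinal data under an unstructured AD(1) model.
from collections import Counter

def transition_mles(sequences, t, num_states=2):
    """MLE of P(Y_{t+1} = j | Y_t = i), estimated separately at each
    time t, i.e. without assuming time-invariant transition probabilities."""
    counts = Counter((seq[t], seq[t + 1]) for seq in sequences)
    mles = {}
    for i in range(num_states):
        row_total = sum(counts[(i, j)] for j in range(num_states))
        for j in range(num_states):
            # Leave the estimate undefined (None) for empty conditioning cells.
            mles[(i, j)] = counts[(i, j)] / row_total if row_total else None
    return mles

# Toy data: 5 subjects, binary severity observed at 3 time points.
data = [(0, 0, 1), (0, 1, 1), (1, 1, 1), (0, 0, 0), (1, 0, 1)]
print(transition_mles(data, t=0))
```

Here each estimate is just a cell count divided by its row total, matching the abstract's point that closed-form MLEs exist even in the presence of empty cells.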