Cluster without fluster: The effect of correlated outcomes on inference in randomized clinical trials

Authors

This article is a U.S. Government work and is in the public domain in the U.S.A.

Abstract

Inference for randomized clinical trials is generally based on the assumption that outcomes are independently and identically distributed under the null hypothesis. In some trials, particularly in infectious disease, outcomes may be correlated. This may be known in advance (e.g. allowing randomization of family members) or completely unplanned (e.g. sexual sharing among randomized participants). There is particular concern when the form of the correlation is essentially unknowable, in which case we cannot take advantage of the correlation to construct a more efficient test. Instead, we can only investigate the impact of potential correlation on the independent-samples test statistic. Randomization tends to balance out treatment and control assignments within clusters, so it is logical that performance of tests averaged over all possible randomization assignments would be essentially unaffected by arbitrary correlation. We confirm this intuition by showing that a permutation test controls the type 1 error rate in a certain average sense whenever the clustering is independent of treatment assignment. It is nonetheless possible to obtain a ‘bad’ randomization such that members of a cluster tend to be assigned to the same treatment. Conditioned on such a bad randomization, the type 1 error rate is increased. Published in 2007 by John Wiley & Sons, Ltd.
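The abstract's central claim, that the independent-samples permutation test controls the type 1 error rate when averaged over randomization assignments even though outcomes are correlated within clusters, can be illustrated by simulation. The sketch below is not the authors' analysis; it is a minimal illustration under assumed parameters (20 clusters of size 2, a shared normal cluster effect, a balanced randomization of 40 participants, nominal level 0.05). Each simulated trial draws clustered null outcomes, independently randomizes treatment, and computes a two-sided permutation p-value for the difference in means; the rejection rate across trials estimates the average type 1 error rate.

```python
import numpy as np

rng = np.random.default_rng(0)

def simulate_null_outcomes(n_clusters=20, cluster_size=2,
                           cluster_sd=1.0, noise_sd=1.0):
    """Clustered outcomes under the null: shared cluster effect plus noise.

    Outcomes within a cluster are correlated; there is no treatment effect.
    """
    effects = rng.normal(0.0, cluster_sd, n_clusters)
    return (np.repeat(effects, cluster_size)
            + rng.normal(0.0, noise_sd, n_clusters * cluster_size))

def permutation_pvalue(y, treat, n_perm=199):
    """Two-sided permutation test of the difference in means,
    ignoring the cluster structure (the independent-samples statistic)."""
    observed = y[treat == 1].mean() - y[treat == 0].mean()
    exceed = 0
    for _ in range(n_perm):
        perm = rng.permutation(treat)
        stat = y[perm == 1].mean() - y[perm == 0].mean()
        if abs(stat) >= abs(observed):
            exceed += 1
    # Include the observed statistic so the p-value is valid (never zero).
    return (exceed + 1) / (n_perm + 1)

# Average over many independent randomizations: because treatment assignment
# is independent of the clustering, the rejection rate at alpha = 0.05
# should stay near the nominal level despite the within-cluster correlation.
n_sims = 300
rejections = 0
for _ in range(n_sims):
    y = simulate_null_outcomes()
    treat = rng.permutation(np.repeat([0, 1], 20))  # balanced assignment
    if permutation_pvalue(y, treat) <= 0.05:
        rejections += 1

rejection_rate = rejections / n_sims
print(f"estimated average type 1 error rate: {rejection_rate:.3f}")
```

Conditioning on a "bad" randomization, in which members of a cluster tend to receive the same treatment, would inflate the rejection rate; the sketch above averages over assignments, which is the setting in which the abstract says the error rate is controlled.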
