Implementing a decision-theoretic design in clinical trials: Why and how?


  • Christopher R. Palmer (corresponding author)
    Centre for Applied Medical Statistics (CAMS), Department of Public Health and Primary Care, University of Cambridge, Institute of Public Health, Forvie Site, Robinson Way, Cambridge CB2 2SR, U.K.
  • Harutyun Shahumyan
    Centre for Applied Medical Statistics, University of Cambridge, Cambridge, U.K.


This paper addresses two main questions: first, why should Bayesian and other innovative, data-dependent design models be put into practice and, secondly, given the past dearth of actual applications, how might one such design be implemented in a genuine trial?

Clinical trials amalgamate theory, practice and ethics, but this last point has become relegated to the background, rather than often taking a more appropriate primary role. Trial practice has evolved, but it has its roots in R. A. Fisher's randomized agricultural field trials of the 1920s. Reasons for, and consequences of, this are discussed from an ethical standpoint, drawing on an under-used dichotomy introduced by French authors Lellouch and Schwartz (Int. Statist. Rev. 1971; 39:27–36). Plenty of ethically motivated designs for trials, including Bayesian designs, have been proposed, but have found little application thus far. One reason for this is a lack of awareness of such alternative designs among trialists; another is a lack of user-friendly software to allow study simulations.

To encourage implementation, a new C++ program called ‘Daniel’ is introduced, offering much potential to assist the design of today's randomized controlled trials. Daniel evaluates a particular decision-theoretic method for trials comparing two or three Bernoulli-response treatments, with input features allowing user-specified choices of: the patient horizon (the number to be treated before and after the comparative stages of the trial); an arbitrary fixed trial truncation size (to allow ready comparison with traditional designs, or to cope with practical constraints); anticipated success rates and a measure of their uncertainty (a matter ignored in standard power calculations); and clinically relevant, and irrelevant, differences in treatment effect size. Error probabilities and expected trial durations can then be explored thoroughly via simulation, it being better by far to harm ‘computer patients’ than real ones.
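The kind of simulation described can be sketched in miniature. The Python fragment below is an illustrative stand-in only, not Daniel's actual decision-theoretic criterion: it simulates a paired-allocation trial of two Bernoulli arms up to a fixed horizon, stopping when a simple Bayesian posterior-probability threshold (an assumed rule chosen for brevity) declares one arm better.

```python
import random

def prob_a_better(s, f, rng, draws=2000):
    """Monte Carlo estimate of P(p_A > p_B) under independent
    Beta(1 + successes, 1 + failures) posteriors for each arm."""
    wins = sum(rng.betavariate(1 + s[0], 1 + f[0]) >
               rng.betavariate(1 + s[1], 1 + f[1]) for _ in range(draws))
    return wins / draws

def simulate_trial(p_a, p_b, horizon=500, threshold=0.95, seed=None):
    """Simulate one trial: allocate patients in pairs, one per arm,
    until the posterior probability that an arm is superior exceeds
    `threshold` or half the horizon (all pairs) is used up.
    Returns (pairs treated, index of the recommended arm)."""
    rng = random.Random(seed)
    s, f = [0, 0], [0, 0]                  # successes/failures per arm
    for pair in range(1, horizon // 2 + 1):
        for arm, p in ((0, p_a), (1, p_b)):
            if rng.random() < p:
                s[arm] += 1
            else:
                f[arm] += 1
        prob = prob_a_better(s, f, rng)
        if prob > threshold or prob < 1 - threshold:
            return pair, (0 if prob > 0.5 else 1)
    return horizon // 2, (0 if prob_a_better(s, f, rng) > 0.5 else 1)
```

Repeating `simulate_trial` over many seeds yields empirical error rates and mean trial lengths, which is the style of exploration the abstract describes; the real program's stopping rule optimizes over the whole patient horizon rather than using a fixed threshold.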

Suppose the objective in a clinical trial is to select between two treatments using a maximum horizon of 500 patients, when the truly superior treatment is expected to yield a 40 per cent success rate, believed to range between 20 and 60 per cent. Simulation studies show that, to detect a clinically relevant absolute difference of 10 per cent between treatments, the decision-theoretic procedure would treat a mean of 68 pairs of patients (SD 37) before correctly identifying the better treatment 96.7 per cent of the time, an error rate of 3.3 per cent. Having made a recommendation based on these patients, the remaining individuals (364 on average) could either be given the indicated treatment, knowing its choice is optimal for the chosen horizon, or else be entered into another, separate clinical trial. For comparison, a fixed sample size trial, with the standard 5 per cent level of significance and 80 per cent power to detect a 10 per cent difference, requires treating over 700 patients in two groups: the half allocated to the inferior treatment considerably outnumbers the 68 expected under the decision-theoretic design, and the overall number is simply too high for realistic application.
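The fixed-sample figure quoted above can be checked with the standard normal-approximation sample-size formula for comparing two proportions. A minimal sketch, assuming success rates of 40 versus 50 per cent to represent the 10-point absolute difference:

```python
from statistics import NormalDist

def two_group_sample_size(p1, p2, alpha=0.05, power=0.80):
    """Per-group n for a two-sided test comparing two proportions,
    using the usual normal-approximation formula."""
    z = NormalDist()
    z_a = z.inv_cdf(1 - alpha / 2)   # about 1.96 for alpha = 0.05
    z_b = z.inv_cdf(power)           # about 0.84 for 80 per cent power
    variance = p1 * (1 - p1) + p2 * (1 - p2)
    return (z_a + z_b) ** 2 * variance / (p1 - p2) ** 2

n_per_group = two_group_sample_size(0.40, 0.50)
total = 2 * n_per_group              # roughly 770 patients overall
```

The total of roughly 770 patients agrees with the "over 700" cited in the comparison, with about 385 of them allocated to the inferior treatment.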

In brief, the keys to answering the above ‘why?’ and ‘how?’ questions are ethics and software, respectively. Wider implications, both pros and cons, of implementing the particular method described will be discussed, with the overall conclusion that, where appropriate, clinical trials are now ready to undergo modernization from the agricultural age to the information age. Copyright © 2007 John Wiley & Sons, Ltd.