Comparing diagnostic tests: trials in people with discordant test results

Authors

  • R. Hooper (corresponding author), Senior Lecturer in Medical Statistics
    Centre for Primary Care and Public Health, Queen Mary University of London, London E1 2AB, U.K.

  • K. Díaz-Ordaz, Research Fellow in Medical Statistics
    Centre for Primary Care and Public Health, Queen Mary University of London, London E1 2AB, U.K.

  • A. Takeda, Systematic Reviewer
    Centre for Primary Care and Public Health, Queen Mary University of London, London E1 2AB, U.K.

  • K. Khan, Professor of Women's Health and Clinical Epidemiology
    Centre for Primary Care and Public Health, Queen Mary University of London, London E1 2AB, U.K.


Correspondence to: R. Hooper, Centre for Primary Care and Public Health, Queen Mary University of London, Yvonne Carter Building, 58 Turner Street, London E1 2AB, U.K.

E-mail: r.l.hooper@qmul.ac.uk

Abstract

Diagnostic tests are traditionally compared for accuracy against a gold standard but can also be compared prospectively in a trial. A conventional trial comparing two tests would randomize each participant to a testing strategy, but a more efficient alternative is to give both tests to all participants and follow up those with discordant results. Participants could be randomized before or after testing. The statistical analysis of such a trial has not previously been described. We investigated two estimates of the risk difference for a binary outcome: one based on analysing outcomes as if from a conventional trial and one combining estimates of different parameters in the manner of a decision analysis. We show that the trial estimate and decision analysis estimate are both unbiased and derive approximate formulae for their standard errors. By using the decision analysis estimate (but not the trial estimate), the same precision can be achieved by randomizing before testing as by randomizing after. To avoid destroying equipoise, and to allow consenting and randomizing to be carried out at the same visit, we recommend randomizing before testing. Giving both tests to all participants means fewer need to be recruited: in one example from the literature, the proposed design was nearly four times more efficient in this sense than a conventional trial design. Copyright © 2012 John Wiley & Sons, Ltd.
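The design described above can be illustrated with a small simulation. In the sketch below, every participant receives both tests; participants with concordant results are managed identically under either strategy, so only discordant participants are followed up, randomized to management according to test A or test B. The decision-analysis-style estimate of the overall risk difference is then the estimated discordance probability multiplied by the estimated risk difference among the discordant. All numerical parameters (`p_discordant`, the two outcome risks) are illustrative assumptions, not values from the paper.

```python
import random

random.seed(1)

# Illustrative assumptions (not taken from the paper):
n = 100_000
p_discordant = 0.2           # assumed probability the two tests disagree
risk_managed_by_A = 0.10     # assumed outcome risk among discordant, strategy A
risk_managed_by_B = 0.25     # assumed outcome risk among discordant, strategy B

events_A = trials_A = events_B = trials_B = n_disc = 0
for _ in range(n):
    if random.random() < p_discordant:   # discordant results: follow up
        n_disc += 1
        if random.random() < 0.5:        # randomize management strategy
            trials_A += 1
            events_A += random.random() < risk_managed_by_A
        else:
            trials_B += 1
            events_B += random.random() < risk_managed_by_B

# Decision-analysis estimate of the overall risk difference:
# Pr(discordant) x (risk difference among discordant participants).
risk_diff = (n_disc / n) * (events_A / trials_A - events_B / trials_B)
print(round(risk_diff, 3))  # close to 0.2 * (0.10 - 0.25) = -0.03
```

Because concordant participants contribute no information to the comparison, the simulation never follows them up, which is the source of the design's efficiency gain over a conventional two-arm trial.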
