Diagnostic tests are traditionally compared for accuracy against a gold standard, but they can also be compared prospectively in a trial. A conventional trial comparing two tests would randomize each participant to a testing strategy; a more efficient alternative is to give both tests to all participants and follow up only those with discordant results. Participants could be randomized either before or after testing. The statistical analysis of such a trial has not previously been described. We investigated two estimates of the risk difference for a binary outcome: one based on analysing outcomes as if from a conventional trial, and one combining estimates of different parameters in the manner of a decision analysis. We show that the trial estimate and the decision analysis estimate are both unbiased and derive approximate formulae for their standard errors. With the decision analysis estimate (but not the trial estimate), the same precision can be achieved by randomizing before testing as by randomizing after. To avoid destroying equipoise, and to allow consent and randomization to be carried out at the same visit, we recommend randomizing before testing. Giving both tests to all participants means fewer participants need to be recruited: in one example from the literature, the proposed design was nearly four times more efficient in this sense than a conventional trial design. Copyright © 2012 John Wiley & Sons, Ltd.
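To illustrate the idea behind a decision-analysis-style estimate, the following sketch assumes (this is our illustration, not a formula taken from the paper) that concordant test results lead to identical management under both strategies, so only discordant participants can contribute to the risk difference. Under that assumption, the risk difference can be written as the discordance proportion times the difference in outcome risk between discordant participants managed according to each test, with an approximate standard error obtained by the delta method. All names and the independence assumption used for the variance are hypothetical simplifications.

```python
import math

def decision_analysis_estimate(n, n_disc, y1_disc, n1_disc, y2_disc, n2_disc):
    """Illustrative decision-analysis estimate of the risk difference
    in a both-tests (paired) design.

    n        -- total participants tested with both tests
    n_disc   -- number with discordant results
    y1_disc  -- adverse outcomes among discordants managed per test 1
    n1_disc  -- discordants managed per test 1 (e.g. after randomization)
    y2_disc, n2_disc -- likewise for test 2

    Assumption (ours): concordant participants contribute nothing to the
    risk difference, so delta = phi * (r1 - r2), where phi is the
    discordance proportion and r1, r2 are outcome risks among discordants.
    """
    phi = n_disc / n
    r1 = y1_disc / n1_disc
    r2 = y2_disc / n2_disc
    delta = phi * (r1 - r2)
    # Delta-method variance, treating phi, r1, r2 as independent
    # binomial proportions (a simplification for illustration only).
    var_phi = phi * (1 - phi) / n
    var_r1 = r1 * (1 - r1) / n1_disc
    var_r2 = r2 * (1 - r2) / n2_disc
    var = (r1 - r2) ** 2 * var_phi + phi ** 2 * (var_r1 + var_r2)
    return delta, math.sqrt(var)
```

For example, with 1000 participants of whom 100 are discordant, and outcome risks of 0.2 versus 0.4 among discordants managed per each test, the estimated risk difference is 0.1 × (0.2 − 0.4) = −0.02. Because the discordance proportion is estimated from all participants, this estimator can be computed whether randomization occurs before or after testing, which is consistent with the abstract's observation that the decision analysis estimate retains its precision when randomizing before testing.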