Summary. Misclassification of exposure variables is a common problem in epidemiologic studies. This paper compares the matrix method (Barron, 1977, Biometrics 33, 414–418; Greenland, 1988a, Statistics in Medicine 7, 745–757) and the inverse matrix method (Marshall, 1990, Journal of Clinical Epidemiology 43, 941–947) to the maximum likelihood estimator (MLE) that corrects the odds ratio for bias due to a misclassified binary covariate. Under differential misclassification, the inverse matrix method is always more efficient than the matrix method; the efficiency, however, depends strongly on the sensitivity, specificity, baseline probability of exposure, odds ratio, case-control ratio, and validation sampling fraction. In a study of sudden infant death syndrome (SIDS), an estimate of the asymptotic relative efficiency (ARE) of the inverse matrix estimator was 0.99, while that of the matrix method was 0.19. Under nondifferential misclassification, neither the matrix nor the inverse matrix estimator is uniformly more efficient than the other; the efficiencies again depend on the underlying parameters. In the SIDS data, the MLE was more efficient than the matrix method ( ). In a study investigating the effect of vitamin A intake on the incidence of breast cancer, the MLE was more efficient than the matrix method ( ).
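
For concreteness, the following is a minimal sketch of the matrix-method correction discussed above, assuming the sensitivity and specificity are known (in practice they would be estimated from a validation substudy); the function names and numerical inputs are hypothetical and not taken from the paper, and differential misclassification is represented by separate (sensitivity, specificity) pairs for cases and controls.

    import numpy as np

    def corrected_counts(n_exp_obs, n_unexp_obs, sens, spec):
        # Matrix method (Barron, 1977): observed counts are modeled as
        # M @ true_counts with M = [[Se, 1-Sp], [1-Se, Sp]], so inverting
        # M recovers the expected true exposure counts for one group.
        M = np.array([[sens, 1.0 - spec],
                      [1.0 - sens, spec]])
        observed = np.array([n_exp_obs, n_unexp_obs], dtype=float)
        return np.linalg.solve(M, observed)  # (exposed, unexposed)

    def corrected_odds_ratio(cases_obs, controls_obs,
                             sens_ca, spec_ca, sens_co, spec_co):
        # Differential misclassification: separate (Se, Sp) per group.
        a, b = corrected_counts(*cases_obs, sens_ca, spec_ca)
        c, d = corrected_counts(*controls_obs, sens_co, spec_co)
        return (a * d) / (b * c)

    # Hypothetical illustrative counts (observed exposed, observed unexposed).
    print(corrected_odds_ratio((120, 380), (80, 420),
                               sens_ca=0.90, spec_ca=0.85,
                               sens_co=0.85, spec_co=0.90))

With these made-up inputs the observed odds ratio of about 1.66 is corrected to about 1.57; the inverse matrix method and the MLE compared in the paper address the same correction problem but reparameterize it differently.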