ABSTRACT  This study uses item analysis techniques to compare two multiple-choice test formats, referred to here as Form A and Form B, for assessing student competence in English structure in terms of mean scores, item difficulty, and item discrimination. Data for this analysis are drawn from answer sheets completed by 239 students: 144 college students and 95 senior high school students. Form A is an error-detection task composed of 20 “sore finger” items randomly selected from previous TOEFL papers. The 20 items on Form B present exactly the same sentences as those on Form A, but in a traditional multiple-choice format. The results show that the mean score on Form B is significantly higher than that on Form A, indicating that Form B was considerably easier for the subjects of this study. It is therefore suggested that Form B may be more suitable for younger or lower-level students, as it helps build their confidence, while Form A may be more appropriate for training higher-level students to detect grammatical errors. Examination of individual items reveals that difficulty and discrimination are item-specific rather than format-dependent, and that items with moderate difficulty indices tend to discriminate better.
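The item statistics named in the abstract follow standard classical-test-theory definitions: an item's difficulty index is the proportion of examinees answering it correctly, and its discrimination index is the difference in that proportion between the upper and lower scoring groups (commonly the top and bottom 27%). A minimal sketch of these formulas follows; the 0/1 response matrix and the 27% split are illustrative assumptions, not the study's actual data or procedure:

```python
# Classical item analysis: difficulty (p) and discrimination (D) indices.
# The scoring matrix passed in is hypothetical; the study itself used
# 239 real answer sheets.

def item_analysis(responses, group_frac=0.27):
    """responses: list of per-student lists of 0/1 item scores.
    Returns (difficulty, discrimination), one value per item."""
    n = len(responses)
    n_items = len(responses[0])

    # Difficulty index p: proportion of all students answering correctly.
    difficulty = [sum(s[i] for s in responses) / n for i in range(n_items)]

    # Discrimination index D: p(upper group) - p(lower group),
    # comparing the top and bottom group_frac of students by total score.
    ranked = sorted(responses, key=sum, reverse=True)
    k = max(1, round(group_frac * n))
    upper, lower = ranked[:k], ranked[-k:]
    discrimination = [
        sum(s[i] for s in upper) / k - sum(s[i] for s in lower) / k
        for i in range(n_items)
    ]
    return difficulty, discrimination
```

On this reading, an item of moderate difficulty (p near 0.5) can separate strong from weak examinees, whereas an item nearly everyone passes or fails yields D close to zero, which is consistent with the abstract's observation that moderately difficult items discriminate best.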