Jing-mei Chung (M.A. and M.S. Ed., The University of Kansas) is a lecturer of English at Ming-Hsin Institute of Technology & Commerce, Hsin-Feng, Hsin-Chu, Taiwan.
A Comparison of Two Multiple-Choice Test Formats for Assessing English Structure Competence
Article first published online: 31 DEC 2008
© 1997 American Council on the Teaching of Foreign Languages
Foreign Language Annals
Volume 30, Issue 1, pages 111–122, March 1997
How to Cite
Chung, J.-m. (1997), A Comparison of Two Multiple-Choice Test Formats for Assessing English Structure Competence. Foreign Language Annals, 30: 111–122. doi: 10.1111/j.1944-9720.1997.tb01321.x
- Issue published online: 31 DEC 2008
ABSTRACT This study uses item analysis techniques to compare two multiple-choice test formats, referred to here as Form A and Form B, for assessing student competence in English structure in terms of mean scores, item difficulty, and item discrimination. Data are drawn from the answer sheets of 239 students: 144 college students and 95 senior high school students. Form A is an error-detection task composed of 20 “sore finger” test items randomly selected from previous TOEFL papers. The 20 items on Form B present exactly the same sentences, but in a traditional multiple-choice format. The results show that the mean score on Form B is significantly higher than that on Form A, indicating that Form B was much easier for the subjects of the study. It is therefore suggested that Form B may be more suitable for younger or lower-level students, since it can build their confidence, while Form A may be more appropriate for higher-level students as training in detecting grammatical errors. Examination of individual items reveals that difficulty and discrimination are item-specific rather than format-dependent, and that items with moderate difficulty indices tend to have better discriminating power.
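The item difficulty and item discrimination indices mentioned in the abstract are standard measures from classical item analysis. The sketch below illustrates one common way they are computed; the study does not publish its exact procedure, so the data and the upper-lower 27% convention here are illustrative assumptions, not the author's actual method.

```python
# Minimal sketch of classical item analysis: difficulty (p) and
# upper-lower discrimination index (D). All data are hypothetical.

def item_difficulty(responses):
    """Proportion of examinees answering the item correctly (0 = hard, 1 = easy)."""
    return sum(responses) / len(responses)

def item_discrimination(responses, totals, fraction=0.27):
    """Upper-lower discrimination index D = p_upper - p_lower.

    Examinees are ranked by total test score; D compares the item's pass
    rate in the top `fraction` of examinees against the bottom `fraction`.
    A commonly used convention is the top and bottom 27%.
    """
    n = max(1, round(fraction * len(totals)))
    ranked = sorted(range(len(totals)), key=lambda i: totals[i])
    lower, upper = ranked[:n], ranked[-n:]
    p_upper = sum(responses[i] for i in upper) / n
    p_lower = sum(responses[i] for i in lower) / n
    return p_upper - p_lower

# Hypothetical scores for one item (1 = correct) and total scores for 10 examinees.
item = [1, 0, 1, 1, 0, 1, 0, 1, 1, 0]
total = [18, 7, 15, 16, 6, 14, 9, 17, 13, 8]

print(item_difficulty(item))             # 0.6
print(item_discrimination(item, total))  # 1.0
```

On this toy data the item has moderate difficulty (p = 0.6) and perfect upper-lower discrimination (D = 1.0), the pattern the abstract associates with well-functioning items.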