
Original Article

Exploring Proficiency‐Based vs. Performance‐Based Items With Elicited Imitation Assessment

Troy L. Cox, Jennifer Bown, Jacob Burdis

Brigham Young University; Missionary Training Center, Church of Jesus Christ of Latter‐day Saints

First published: 27 August 2015
Cited by: 1

Troy L. Cox (PhD, Brigham Young University) is Associate Director of Research and Assessment, Center for Language Studies at Brigham Young University, Provo, Utah.

Jennifer Bown (PhD, The Ohio State University) is Associate Professor of Russian, Brigham Young University, Provo, Utah.

Jacob Burdis (PhD, Brigham Young University) is Senior Language Product Manager, Missionary Training Center, Church of Jesus Christ of Latter‐day Saints, Provo, Utah.

Abstract

This study investigates the effect of proficiency‐based vs. performance‐based item content in elicited imitation (EI) assessment. EI requires test‐takers to repeat sentences in the target language; the accuracy with which they do so correlates strongly with their language proficiency. However, the factors that make an EI item more difficult are still being investigated. To determine whether item difficulty and test performance differ between proficiency‐ and performance‐based tests, two EI instruments were created: one measuring proficiency with items drawn from a general corpus, and another measuring language for specific purposes (LSP) performance with items drawn from a domain‐specific corpus. The two instruments were then administered to 98 subjects of varying proficiency. The mean score on the LSP performance test (M = 0.51) was significantly higher than the mean score on the proficiency test (M = 0.44, p < 0.001). In addition, item difficulties for the LSP items were significantly lower than those for the general items (p < 0.05), indicating that the content of EI items affects item difficulty. The data suggest that the two approaches to EI assess different constructs and cannot be used interchangeably.


  • Understanding Intermediate‐Level Speakers’ Strengths and Weaknesses: An Examination of OPIc Tests From Korean Learners of English, Foreign Language Annals, 50(1), 84–113 (2017).