Discourse Analysis in Language Assessment

Anne Lazaraton

Published Online: 5 NOV 2012

DOI: 10.1002/9781405198431.wbeal0321

The Encyclopedia of Applied Linguistics

How to Cite

Lazaraton, A. (2012). Discourse Analysis in Language Assessment. In The Encyclopedia of Applied Linguistics. doi:10.1002/9781405198431.wbeal0321

Abstract

When second language (L2) learners take a language test, they are interested in one outcome above all: their scores. Likewise, language testers have been preoccupied with ensuring that these scores, the product of the assessment, are reliable and valid. In L2 performance assessment (where L2 learners produce speech or writing for assessment), the primary challenge is to ensure consistent ratings of language. Much work in oral language assessment (the focus of this entry) has been, and continues to be, devoted to ensuring both inter-rater reliability (the degree to which two or more raters agree on scores) and intra-rater reliability (the degree to which a single rater's scores are consistent across language samples and over time). In recent years, issues of test validity (i.e., the adequacy of inferences made from test results; see Messick, 1989; Moss, 1994) have emerged as a central concern for the language assessment community. This is particularly true for L2 speaking tests, where the interests of some scholars have shifted from rating consistency to the process of speaking assessment itself: What typifies the language that interlocutors produce in speaking tests, and how can descriptions of that language feed into reliable and valid rating of test talk? Discourse analysis has proven to be a promising method for answering these questions.

Keywords:

  • assessment methods in applied linguistics
  • discourse analysis
  • assessment