Task Effects on Linguistic Complexity and Accuracy: A Large‐Scale Learner Corpus Analysis Employing Natural Language Processing Techniques
We want to thank the anonymous Language Learning reviewers for their helpful feedback. Our research was supported as part of the LEAD Graduate School & Research Network [GSC1028], a project of the Excellence Initiative of the German federal and state governments, and by grants ANR‐11‐LABX‐0036 (BLRI) and ANR‐11‐IDEX‐0001‐02 (A*MIDEX). We also gratefully acknowledge the support of EF Education First through the sponsorship of the EF Research Lab for Applied Language Learning at the University of Cambridge.
Abstract
Large‐scale learner corpora collected from online language learning platforms, such as the EF‐Cambridge Open Language Database (EFCAMDAT), provide opportunities to analyze learner data at an unprecedented scale. However, interpreting the learner language in such corpora requires a precise understanding of tasks: How do a task's prompt, input, and functional requirements influence task‐based linguistic performance? This question is vital for making large‐scale task‐based corpora fruitful for second language acquisition research. We explore the issue through an analysis of selected tasks in EFCAMDAT and the complexity and accuracy of the language they elicit.