Volume 55, Issue 1
Original Article

Automatic Item Generation: A More Efficient Process for Developing Mathematics Achievement Items?

First published: 01 March 2018

Abstract

The continual supply of new items is crucial to maintaining quality for many tests. Automatic item generation (AIG) has the potential to rapidly increase the number of items that are available. However, the efficiency of AIG will be undermined if the generated items must be submitted to traditional, time‐consuming review processes. In two studies, generated mathematics achievement items were subjected to multiple stages of qualitative review for measuring the intended skills, followed by empirical tryout in operational testing. High rates of success were found. Further, items generated from the same item structure had predictable psychometric properties. Thus, the feasibility of a more limited and expedient review process was supported. Positive results were also obtained when the same skills were measured with item structures of reduced cognitive complexity.
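The abstract turns on the idea of generating many items from a single item structure (an item model with variable slots and an answer rule). The following minimal Python sketch illustrates that idea only; the item model, variable ranges, and distractor rules here are hypothetical illustrations and are not the generator or content used in the studies reported in the article.

```python
import random

# Hypothetical item model: a stem template with variable slots, the domains
# those variables may take, and a rule that computes the keyed answer.
ITEM_MODEL = {
    "stem": ("A class collects {a} cans of food in week 1 and {b} cans in week 2. "
             "How many cans did the class collect in total?"),
    "variables": {"a": range(12, 50), "b": range(12, 50)},
    "answer": lambda a, b: a + b,
}

def generate_item(model, rng):
    """Sample one item instance (stem, key, options) from an item model."""
    values = {name: rng.choice(list(domain)) for name, domain in model["variables"].items()}
    key = model["answer"](**values)
    # Simple, illustrative distractor rules (e.g., subtracting instead of adding,
    # or an off-by-ten slip); operational AIG systems encode such rules explicitly.
    distractors = {abs(values["a"] - values["b"]), key + 10, key - 10}
    distractors.discard(key)
    return {
        "stem": model["stem"].format(**values),
        "key": key,
        "options": sorted(distractors | {key}),
    }

if __name__ == "__main__":
    rng = random.Random(2018)
    for _ in range(3):
        item = generate_item(ITEM_MODEL, rng)
        print(item["stem"], "->", item["key"], item["options"])
```

Because every instance drawn from the same item model shares its structure and solution path, one would expect the instances to behave similarly in tryout, which is the property the studies examine when they report that items generated from the same structure had predictable psychometric properties.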

Number of times cited according to CrossRef: 2

  • Automating the Generation of Test Items. In Encyclopedia of Organizational Knowledge, Administration, and Technology, pp. 233–244 (2021). doi:10.4018/978-1-7998-3473-1.ch019.
  • Automatic Question Generator System Conceptual Model for Mathematic and Geometry Parallel Question Replication. Journal of Physics: Conference Series, 1577, 012023 (2020). doi:10.1088/1742-6596/1577/1/012023.
