Automatic Item Generation: A More Efficient Process for Developing Mathematics Achievement Items?
Abstract
The continual supply of new items is crucial to maintaining quality for many tests. Automatic item generation (AIG) has the potential to rapidly increase the number of available items. However, the efficiency of AIG will be diminished if the generated items must be submitted to traditional, time-consuming review processes. In two studies, generated mathematics achievement items were subjected to multiple stages of qualitative review for measuring the intended skills, followed by empirical tryout in operational testing. High rates of success were found. Further, items generated from the same item structure had predictable psychometric properties. Thus, the feasibility of a more limited and expedient review process was supported. Additionally, positive results were obtained for measuring the same skills from item structures with reduced cognitive complexity.