ORIGINAL RESEARCH article
Front. Artif. Intell.
Sec. AI for Human Learning and Behavior Change
Artificial Test-Takers as Transformed Controls: Measuring SAT Difficulty Drift and Student Performance
University of Cincinnati, Cincinnati, United States
Abstract
Introduction: Standardized test score trends are widely used to track student performance and inform policy, but they are difficult to interpret when exam content changes over time. We introduce an artificial test-taker framework that uses a fixed large language model as a stable benchmark to measure SAT Math difficulty drift and to construct difficulty-adjusted measures of student performance.

Methods: We built a longitudinal SAT Math item bank from SATs spanning 2007–2023. For each year, we generated 50 bootstrapped SAT forms matching the year-specific section blueprint and administered all items to GPT-4, whose parameters and training remain fixed across administrations, to construct counterfactual benchmarks. We combined these difficulty benchmarks with national SAT Math scores released by the College Board and assessed robustness to compositional changes.

Results: The artificial test-taker framework indicates a statistically significant decline in SAT Math difficulty of 0.21σ relative to 2012. After adjusting for test difficulty using the transformed-control benchmark, student performance declines by 34 points in Average Difference in Scores (ADS) from 2012 to 2023. Heterogeneity analyses show that these declines are not uniform across racial groups.

Discussion: Artificial test-takers provide a scalable, protocol-invariant audit of longitudinal comparability when traditional equating is infeasible, opaque, or incomplete. Our findings imply that evolving SAT Math content can mask a substantial underlying decline in performance and can differentially obscure trends across student subgroups. More broadly, transformed-control designs using AI offer a tool for benchmarking educational outcomes and for separating changes in measured performance from changes in the measurement instrument itself.
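For readers who want the mechanics, the following is a minimal Python sketch of the bootstrap-and-administer protocol described in the Methods. The item-bank schema, the blueprint counts, and the answer_fn wrapper are illustrative assumptions; they are not the authors' code, and the domain counts are not the College Board's actual blueprints.

```python
import random

# Hypothetical year-specific section blueprints: number of items to draw per
# content domain. Counts are placeholders, not actual SAT blueprints.
BLUEPRINTS = {
    2012: {"algebra": 19, "geometry": 15, "data_analysis": 10},
    2023: {"algebra": 22, "geometry": 10, "data_analysis": 12},
}

def bootstrap_form(item_bank, blueprint, rng):
    """Resample one SAT Math form matching the blueprint's domain mix."""
    form = []
    for domain, n_items in blueprint.items():
        pool = [item for item in item_bank if item["domain"] == domain]
        form.extend(rng.choices(pool, k=n_items))  # bootstrap: with replacement
    return form

def benchmark_year(item_bank, blueprint, answer_fn, n_forms=50, seed=0):
    """Mean accuracy of the artificial test-taker over bootstrapped forms.

    answer_fn wraps a single call to the fixed LLM (same model snapshot,
    same prompt, deterministic decoding) and returns its answer to one
    question; items are dicts with "domain", "question", and "answer" keys.
    """
    rng = random.Random(seed)
    form_scores = []
    for _ in range(n_forms):
        form = bootstrap_form(item_bank, blueprint, rng)
        n_correct = sum(answer_fn(item["question"]) == item["answer"]
                        for item in form)
        form_scores.append(n_correct / len(form))
    return sum(form_scores) / len(form_scores)
```

Under this protocol, a year whose benchmark accuracy rises is, by construction, easier for the fixed reference test-taker; that is the sense in which the unchanging model serves as a transformed control against which drift in the instrument can be separated from drift in student performance.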
Keywords
artificial intelligence, artificial test-takers, large language models, standardized testing, transformed control
Received
25 August 2025
Accepted
20 February 2026
Copyright
© 2026 K Suresh and Rawat. This is an open-access article distributed under the terms of the Creative Commons Attribution License (CC BY). The use, distribution or reproduction in other forums is permitted, provided the original author(s) or licensor are credited and that the original publication in this journal is cited, in accordance with accepted academic practice. No use, distribution or reproduction is permitted which does not comply with these terms.
*Correspondence: Vikram K Suresh
Disclaimer
All claims expressed in this article are solely those of the authors and do not necessarily represent those of their affiliated organizations, or those of the publisher, the editors and the reviewers. Any product that may be evaluated in this article or claim that may be made by its manufacturer is not guaranteed or endorsed by the publisher.