- 1University of Innsbruck, Innsbruck, Austria
- 2International Association for the Evaluation of Educational Achievement, Amsterdam, Netherlands
Editorial on the Research Topic
How do we collect all this data? A performative account of International Large-Scale Assessment data collection in times of systemic diversity
International Large-Scale Assessments (ILSAs) aim to provide evidence-based, comparable information about the status of educational systems. The goal is to monitor and benchmark achievement by providing participating countries with accurate, valid and reliable data, and to identify systemic challenges. Participation in studies such as TIMSS, PIRLS and PISA is guided by well-documented and standardized processes and procedures. These measures result in nationally representative data on participants' achievements and contextual backgrounds (von Davier et al., 2023).
Participation does not involve a mechanical copy or repetition of the study from one cycle to the next (Combrinck and van Staden, 2023). Instead, each new cycle is administered as a large-scale cross-sectional survey with country-level adjustments to design and sampling that are closely monitored with a view to comparability over time. These adjustments ensure that, as far as possible, the overall results accurately reflect the academic achievement of participating students and provide a (selective) picture of their national, school, classroom and home backgrounds for a particular cycle of participation.
The standardized operating procedures, and the intention of highly standardized practice in administering ILSAs, are well documented. So too are achievement reports, both national and international, that provide a track record of countries' educational attainment (see, for example, Džumhur et al., 2025). What is less pronounced for reporting purposes is the “black box” of how the intention of standardized processes plays out in each participating country during data collection. The data for each country are subject to the specific conditions under which they were collected, such as the feasibility of drawing nationally representative samples (strata, languages of testing, populations, etc.), which raises questions about how accurately the data reflect reality. However, our concern is not primarily with the reliability, validity and credibility of ILSA data as a result of standardized procedures. Rather, we argue that the process of data collection plays a pivotal role in translating high degrees of standardization into implementation at the country level. In this way, data collection in and of itself tells us something about the context of the education system in which ILSAs are administered.
This Research Topic aims to describe country-level data collection processes, challenges and considerations, as illustrated graphically below.
Figure 1 shows how the standardized construction of a study like PIRLS sets out guidelines to ensure a reliable, valid and objective construction of tests and procedures. The country-level “black box” accounts of data collection as a resource, a discovery and an assumption on the operational side of ILSAs are crucial to the creation, curation and compilation of ongoing systems for educational monitoring and benchmarking. Attending not only to what we collect but also to how it is collected, to decisions about design, and to implementation challenges offers a glimpse into an underrepresented area of scholarly work, which tends to focus on data outcomes and results.
Two Southern Hemisphere perspectives, from New Zealand and South Africa, provide test administration insights. While the Research Topic includes a conceptual analysis of the prospects of developing culturally inclusive models for education by Urrutia-Jorde, the New Zealand example (Chamberlain and Bennett) speaks to issues of cultural inclusivity by detailing decisions about languages of testing in small, marginalized populations. New Zealand's experience serves as an example of a post-colonial country negotiating the need and desire to be educationally inclusive with questions about whether smaller populations meet ILSAs' rigorous participation standards. The South African case by Roux highlights test administration in a developing, socio-economically diverse, multicultural and multilingual context. Despite significant advances in administrative capacity, stakeholder engagement and methodological rigor, persistent issues such as resource constraints, infrastructural disparities and a lack of skilled capacity continue to pose administration challenges.
From the Northern Hemisphere, Ireland and Italy describe their investments in increasing response rates. The Irish perspective by Delaney et al. details efforts to promote participation and engagement through initiatives that place consultation, promotion and support for studies like PIRLS at their center. Italy's review of the existing literature highlights student motivation as central to low-stakes assessments, while its empirical experience points to customized communication strategies, logistical support and privacy safeguards, particularly in engaging with parents, as essential for securing engagement and trust (Palmerio and Caponera). Original research from Latvia (Kampmane et al.) paints a picture of the optimal conditions for ensuring voluntary participation in controlled samples; data from this study show that the motivation of teachers, principals and school coordinators to complete questionnaires is strongly affected by their perception of the meaningfulness and relevance of the survey to their day-to-day educational activities.
This Research Topic is based on the premise that ILSAs can identify systemic problems and monitor and benchmark achievement, providing participating countries with accurate, valid and reliable data. Well-documented, standardized processes and procedures that result in nationally representative achievement and contextual background data for participants are of the utmost importance for international comparability. Yet the intention of standardized processes and procedures must be counterbalanced by strong institutional support and thoughtful, flexible implementation strategies to maintain high levels of participation across all countries, sampled populations and groups of respondents.
Author contributions
SV: Writing – original draft, Writing – review & editing, Methodology, Project administration, Conceptualization. CK: Project administration, Writing – review & editing, Methodology, Conceptualization, Writing – original draft. PK: Project administration, Writing – original draft, Methodology, Writing – review & editing, Conceptualization.
Conflict of interest
The author(s) declare that this work was conducted in the absence of any commercial or financial relationships that could be construed as a potential conflict of interest.
Generative AI statement
The author(s) declare that generative AI was not used in the creation of this manuscript.
Any alternative text (alt text) provided alongside figures in this article has been generated by Frontiers with the support of artificial intelligence and reasonable efforts have been made to ensure accuracy, including review by the authors wherever possible. If you identify any issues, please contact us.
Publisher's note
All claims expressed in this article are solely those of the authors and do not necessarily represent those of their affiliated organizations, or those of the publisher, the editors and the reviewers. Any product that may be evaluated in this article, or claim that may be made by its manufacturer, is not guaranteed or endorsed by the publisher.
References
Combrinck, C., and van Staden, S. (2023). “The complexities of international large-scale assessments in a developing context,” in Tracking Changes in South African Reading Literacy Achievement, eds. S. van Staden and C. Combrinck (Leiden: Brill), 1–16. doi: 10.1163/9789004687011_001
Džumhur, Ž., Koršnáková, P., and Meinck, S. (2025). “Dinaric perspectives on PIRLS 2021: prerequisites and conditions for teaching and learning to read,” in IEA Research for Education, vol. 17 (Cham: Springer). Available online at: https://link.springer.com/book/10.1007/978-3-031-88002-5#bibliographic-information (Accessed July 12, 2025).
von Davier, M., Mullis, I. V. S., Fishbein, B., and Foy, P. (Eds.). (2023). Methods and Procedures: PIRLS 2021 Technical Report. Boston College, TIMSS and PIRLS International Study Center. Available online at: https://pirls2021.org/methods (Accessed December 8, 2025).
Keywords: data collection, implementation, International Large-Scale Assessments (ILSA), progress in international reading literacy study, standardized procedures
Citation: Van Staden S, Kraler C and Korsnakova P (2026) Editorial: How do we collect all this data? A performative account of International Large-Scale Assessment data collection in times of systemic diversity. Front. Educ. 10:1764036. doi: 10.3389/feduc.2025.1764036
Received: 09 December 2025; Accepted: 15 December 2025;
Published: 09 January 2026.
Edited and reviewed by: Gavin T. L. Brown, The University of Auckland, New Zealand
Copyright © 2026 Van Staden, Kraler and Korsnakova. This is an open-access article distributed under the terms of the Creative Commons Attribution License (CC BY). The use, distribution or reproduction in other forums is permitted, provided the original author(s) and the copyright owner(s) are credited and that the original publication in this journal is cited, in accordance with accepted academic practice. No use, distribution or reproduction is permitted which does not comply with these terms.
*Correspondence: Surette Van Staden, surette.van-staden@uibk.ac.at