
ORIGINAL RESEARCH article

Front. Educ., 03 September 2025

Sec. Leadership in Education

Volume 10 - 2025 | https://doi.org/10.3389/feduc.2025.1616717

If evidence matters, why does the data die? Implementing education management information systems (EMIS) in development contexts

  • Faculty of Arts and Education, University of Auckland, Auckland, New Zealand

Over the last three decades, international organizations have been increasingly involved in collecting and utilizing data to enhance data-driven decision making in the education sectors of development contexts. This is carried out mainly through Education Management Information Systems (EMIS). Research shows, however, that while EMIS have managed to capture large amounts of data in such contexts, evidence of the use of data towards improvement of outcomes is sparse at best, with persistent reports that EMIS data is underutilized nationally. This article relies on documentary data and key informant interviews to explore the reasons for this failure of utility, critically examining the mechanisms underpinning the continued challenges to the utilization of EMIS data in development contexts. The data is analyzed using a reflexive thematic analysis approach from a critical realist lens, interweaving literature, data and inquiry to raise critical questions about and offer potential explanations as to why the data dies. This work reveals three profoundly contextual mechanisms operating within the structure of the ‘development project:’ (1) a Eurocentric notion of the generalizability of EMIS; (2) an enduring colonial mindset that considers educational data another resource that can be extracted and used; and (3) a devaluation of people from such contexts, as entities with agency and rights, in favor of data.

1 Introduction

“Data is the lifeblood of decision making,” noted a 2014 United Nations (UN) report, as the world prepared to launch the new Sustainable Development Goals (SDGs), explaining that without data “designing, monitoring and evaluating effective policies becomes almost impossible” (p.2). This drive towards data-driven decision-making1 has become ever more apparent in the education sector over the last three decades (Hossain, 2023). During this time, international organizations (IOs) have increasingly been involved in collecting and utilizing massive amounts of data to enhance governance in the education sectors of development contexts.2 This has also enabled IOs to “build ‘smart’ data systems that would inform the decision-making and strategic planning processes of government officials” (Hossain, 2023, p.1), mainly through Education Management Information Systems (EMIS).

EMIS are systems that aid with the collection, analysis, dissemination and utilization of data in and about an education system (Abdul-Hamid, 2014; Abdul-Hamid et al., 2017). Data collected by EMIS is meant to support the formulation and implementation of education policies based on evidence, improve policymaking, make education systems more effective and efficient, and promote educational quality and equity (Abdul-Hamid, 2014). EMIS data can also support other entities—universities, researchers and IOs such as the United Nations (UN)—to track global education statistics and “[add] value to the discussion surrounding education, [promote] best practices, and [help] introduce new innovative ideas” (Abdul-Hamid, 2017, p. 12). Providing an alternative to existing systems of educational governance, EMIS are, therefore, a form of data governance (Hossain, 2023) which relies on data and ‘evidence’ to tell us “what works” rather than on speculations or ideology (Parkhurst, 2017).

Since EMIS data is expected to supply reliable and accurate information to varied stakeholders in the education sector, EMIS have been widely promoted in the educational development projects of IOs, such as the World Bank (WB) and United Nations Educational, Scientific and Cultural Organization (UNESCO) (Abdul-Hamid et al., 2017; Iyengar et al., 2016; Riddell and Niño-Zarazúa, 2016; UNESCO and Global Partnership for Education, 2020). Successful EMIS ‘activities’ in development contexts have also encouraged further support for EMIS implementation projects. These ‘activities’ include the use of EMIS to strengthen teaching and learning in Guatemala and Lithuania; its utilization as a management tool in Bosnia and Herzegovina and in Malaysia; its role in planning in Nigeria; and the creation of an online EMIS to improve access to education data in Honduras, amongst others3 (Abdul-Hamid et al., 2017). However, despite the significant prioritization and investment in EMIS by the WB, UNESCO and others, we know little about the potential outcomes of EMIS implementation beyond specific “project” outcomes or the extent of their ‘success’ in the long term (Riddell and Niño-Zarazúa, 2016).

Research shows that while EMIS have managed to capture large amounts of data from many Global South jurisdictions, evidence of the use of this data towards improvement of outcomes for the system is sparse at best, revealing persistent issues with regard to the utility of EMIS data across contexts and governance models (Abdul-Hamid, 2017; Abdul-Hamid et al., 2017; Hossain, 2023; Mintz and Saraogi, 2015a, 2015b; Rossiter, 2020; Saraogi et al., 2017a; UNESCO Institute for Statistics, 2017b; UNESCO and Global Partnership for Education, 2020). In fact, examples of projects improving data collection but failing to improve data use range across contexts, regions and governance structures, extending from Timor Leste to Ghana, India and Ethiopia (Abdul-Hamid et al., 2017; Rossiter, 2020). It is clear that governments can and do make substantial investments in gathering statistical data, but these are driven by top-down requirements of governmental bodies or of development partners and often have minimal returns nationally (OECD, 2017; Rossiter, 2020). In many cases, the data is used for financial accountability, to support further funding mechanisms, or to convey the country’s progress in global initiative commitments (Tolley and Shulruf, 2009) rather than for internal analysis and reflection to determine the root causes of recurring problems, or to measure and analyze the impact of initiatives (Rossiter, 2020; Tolley and Shulruf, 2009). Moreover, many education ministries in such contexts lack the necessary skills and systems for data integration—a challenge identified as a main stumbling block in settings as varied as Latvia, Eritrea, Mauritania, and Vietnam (Abdul-Hamid et al., 2017; Rossiter, 2020). As a result, data remains fragmented and access highly restricted, both nationally and internationally, leading to minimal usage (OECD, 2017; Rossiter, 2020).

The question this article addresses is deceptively simple: if evidence is central to decision-making, why does utilizing the ‘evidence’ EMIS produce continue to be a challenge, especially nationally? In this paper, I critically examine some of the fundamental reasons for the “widespread deficiencies” in the utilization of EMIS data, beyond the availability of funding, technology or standards and mechanisms to institutionalize and operationalize the EMIS (Abdul-Hamid et al., 2017, p. xiv). Such challenges, and the efforts to address them, are frequently discussed in the literature (Abdul-Hamid, 2014; Abdul-Hamid et al., 2017; Mintz and Saraogi, 2015a, 2015b; Saraogi et al., 2017a, 2017b; UNESCO Institute for Statistics, 2017b; UNESCO and Global Partnership for Education, 2020), yet they remain. This paper offers a new perspective by examining the underlying mechanisms for EMIS’s limited success in terms of data utilization within development contexts.

The term utilization has multiple meanings. In this paper, utilization refers to the realities of the system implementation and the effective usage of EMIS data for decision-making in practice, i.e., operational use by the education stakeholders to inform policy and practice (Abdul-Hamid, 2014, 2017; Abdul-Hamid et al., 2017). Utilization involves consideration of the value of the EMIS and its data, and who draws value from that utilization. For the main education stakeholders, the value of EMIS and its data centers around their domestic use for evaluation and governance by governmental entities, schools and other local ‘clients’ such as parents, communities and even private sector actors (Abdul-Hamid, 2017). For multilateral development organizations (e.g., World Bank, UNESCO) and bilateral agencies (e.g., donor agencies), utilization often also includes research, benchmarking against other nations and international standards, securing funds for educational improvement, and/or providing assistance and resources (Abdul-Hamid, 2017).

To critically examine the potential reasons why “what works” failed to do so, I rely on two main sources of information: documentary data and interviews with key informants. I have analyzed my data using a reflexive thematic analysis approach from a critical realist lens. In addition to providing a theoretical perspective (and aligning with my theoretical position), critical realism (CR) provides the article’s conceptual and structural framework. CR views reality as a “depth ontology” in which our knowledge develops in inverse order, from empirical events and experiences to the actual and then the real,4 gradually exposing the underlying structures, powers and generative mechanisms within those events5 (de Souza, 2014). The article is structured to reflect that inverse order, beginning with an analysis of the documentary data on the reasons for the failure of utility presented as the empirical level of knowledge, followed by three subsequent sections delving progressively deeper into the structures and mechanisms behind that failure. The shape of the discussion in each section also aligns with CR, seeking causal explanations in a process of ‘macro regress’ to the social structures in which these events are positioned (de Souza, 2014, p. 149), drawing on multiple conceptual frameworks as we move among and between strata, and inviting reflection rather than offering definitive explanations, answers or solutions. Consequently, instead of the traditional deductive format, each section’s argument develops inductively, building on an analysis of relevant literature and documentary data with (and through) the perspectives of the key informants, which are woven into the discussion.

It is important to highlight that this work is less about a linear methodology moving from research question to discussion answer, and more about a journey of discovery into the reasons for the failure of utility of EMIS, employing writing as a method of enquiry (Molinari, 2024; Richardson and St. Pierre, 2005). It is an interweaving of literature, data and inquiry where I do not seek to resolve the question at the crux of this work—why EMIS data is not used—but to raise (and perhaps wrestle with) potential explanations for and critical questions about that lack of usage. Throughout this article, I grapple with the areas of ambiguity that arise from an analysis of the data, connecting the failure of utility to the façades, infrastructures and hierarchies of power that lie beneath the gathering and abandoning of educational data. Relevant literature and explanatory frameworks are embedded throughout to (re)frame and support the critical examination of the insights the data offers on why ‘what works’ did not, in fact, work.

2 Data and methodology

My rationale for the selection of data sources—and their later analysis—is also influenced by a CR approach. First, the IOs of interest were carefully chosen. The World Bank (WB) has led the proliferation of EMIS, serving as the main disseminator through its working papers (Abdul-Hamid, 2014, 2017; Abdul-Hamid et al., 2017) and as the primary provider of development financing in international education since the 1960s (Heyneman, 2003; Abdul-Hamid et al., 2017). UNESCO, mainly through its Institute for Statistics (UIS), is also involved in EMIS—mainly on the level of educational data, as part of its mandate to monitor progress towards the UN’s Sustainable Development Goal on education (SDG 4) and other global targets (UNESCO Institute for Statistics, 2020). Given their extensive involvement in educational data and EMIS as part of a development agenda, these IOs’ personnel and documentation were ideal data sources for this article.

The first source of data is written documentation in the form of reports, working papers and benchmarking tools, consisting mainly of the WB documents that established the rationale for educational data and the SABER-EMIS framework (Abdul-Hamid, 2014, 2017) and the various WB country analyses and reports (Mintz and Saraogi, 2015a, 2015b; Saraogi et al., 2017a, 2017b), especially the Portfolio Review into the World Bank’s EMIS activities from 1998 to 2014 (Abdul-Hamid et al., 2017).6 This type of grey literature7 is the principal means through which the IOs of interest—the World Bank and UNESCO—express their perspectives on EMIS, often publishing such documents on online platforms that are freely available on the Internet or in public forums. However, to avoid any potential misuse of such data, this study has followed a methodology adapted from Godin et al.’s (2015) application of systematic review search methods to evaluate the suitability of these documents. This methodology involves identifying clear research parameters such as inclusion and exclusion criteria as well as the key terms (see Appendix A for details). The initial literature was retrieved based on these inclusion and exclusion criteria. This literature was then screened by reading the abstracts, executive summaries or overviews to assess each source’s relevance to the utilization of EMIS data, excluding any that did not meet that focus. This allowed for the refining and narrowing of the final set of results. Finally, a close reading of the final selections determined their eligibility and provided the final list of resources (see Appendix B for details).
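The staged narrowing described above can be sketched programmatically. The following is a minimal illustrative sketch only, not the study's actual tooling: the document records, publishers, cut-off year and key terms are all hypothetical placeholders standing in for the parameters detailed in Appendix A.

```python
# Illustrative sketch of a two-stage grey-literature screen:
# (1) apply inclusion/exclusion criteria, (2) screen abstracts for
# relevance to EMIS data utilization. All parameters are hypothetical.
from dataclasses import dataclass

@dataclass
class Document:
    title: str
    publisher: str
    year: int
    abstract: str

# Stage 1: inclusion/exclusion criteria (hypothetical values).
INCLUDED_PUBLISHERS = {"World Bank", "UNESCO"}
MIN_YEAR = 1998

def meets_criteria(doc: Document) -> bool:
    """Retain only documents from the IOs of interest within the study window."""
    return doc.publisher in INCLUDED_PUBLISHERS and doc.year >= MIN_YEAR

# Stage 2: screen abstracts/summaries against key terms (hypothetical).
KEY_TERMS = {"emis", "data utilization", "education management information"}

def is_relevant(doc: Document) -> bool:
    """Check whether the abstract addresses the utilization of EMIS data."""
    text = doc.abstract.lower()
    return any(term in text for term in KEY_TERMS)

def screen(documents: list) -> list:
    """Criteria filter first, then abstract-level relevance screening."""
    retrieved = [d for d in documents if meets_criteria(d)]
    return [d for d in retrieved if is_relevant(d)]

# Hypothetical corpus to demonstrate the narrowing.
corpus = [
    Document("EMIS Portfolio Review", "World Bank", 2017,
             "Reviews EMIS projects and data utilization from 1998 to 2014."),
    Document("Unrelated health report", "World Bank", 2016,
             "Health financing in low-income settings."),
    Document("EMIS case study", "Other NGO", 2019,
             "EMIS data utilization in one country."),
]

shortlist = screen(corpus)  # only the first record survives both stages
```

The final close-reading stage is necessarily manual, so the sketch stops at the screened shortlist.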

To further enrich the data, deepening both my understanding and the discussion, I sought key informants from various organizations under the UN umbrella, approaching experts from a range of departments, such as UNICEF, the Human Development Unit at the World Bank, the UNESCO Institute for Statistics (UIS) and others. This approach aligns with purposive sampling, a qualitative research technique that involves the selection of ‘knowledgeable people’—those who have “in depth knowledge about the particular issues, maybe by virtue of their professional role, power, access to networks, expertise or experience” (Cohen et al., 2018, p. 156)—to understand a phenomenon in depth rather than generalize to a population. I interviewed four key informants using a semi-structured interview methodology, which yielded in-depth information about their perceptions of the IOs’ promotion and/or implementation of EMIS in development contexts (see Appendix C for the interview protocol). These experts are long-term consultants who have worked on EMIS and educational data for one or more multilateral organizations, both at headquarters and in the field, for a number of years.8 Two of them have also worked in national statistics bureaus, and one has experience working on data-related projects in the private sector. To protect their anonymity, neither the expert informants nor their organizations are identified by name in this paper. Instead, they are referred to by code as Expert Informants 1–4 (E1–E4) for ease of reference. No other demographic or professional details are attributed to individual informants to avoid disclosing identifiable information.

To analyze both sets of qualitative data, I employed reflexive thematic analysis (RTA) using a critical realist approach (Bhaskar, 1975, 2008). The key appeal of RTA is that: (1) it has few inherent restraints around data, allowing for the qualitative analysis of both interviews and documentary data (Morgan, 2022); (2) it is a theoretically and methodologically flexible approach that accommodates this article’s ontological and epistemological focus and its research design; and finally (3) it acknowledges that all knowledge is positional, situated and “shaped by the process and practices of knowledge” including those of the researcher, the informants, and the context (Braun and Clarke, 2021, p.70). This last characteristic necessitates “the practice of critical reflection” of the researcher position,9 practice, and process (Braun and Clarke, 2021, p. 55). Consequently, this article’s data analysis was iterative, involving an abductive process of coding, which acknowledged that all data analysis is based on theoretical assumptions and underpinnings while also allowing for unexpected codes and themes to emerge from the data (Braun and Clarke, 2021). The capacity of RTA to support the emergence of these “latent themes”—in which the data analysis explores meaning at the “underlying or implicit level” (Braun and Clarke, 2021, p.66)—is what enables the emergence of the analytic story of this article and accommodates my iterative interactions with and evolving interpretations of the data.

These methodological choices also align with a critical realist ontology (and epistemology), a paradigm that views reality as strata with overlapping empirical, actual, and real layers of reality (Bhaskar, 1975, 2008). In addition to the influence of this paradigm on the shape of the discussion and the design of the article (discussed above), such an ontology corresponds to the study’s explanatory nature in terms of uncovering the underlying but hidden mechanisms that have resulted in the failure to utilize EMIS effectively. It further corresponds to the textured nature of both the analysis and the writing in which the analytic narrative is built through the deliberate layering of literature, data and inquiry. CR also allows the inclusion of “non- or anti-Eurocentric ideas that temper long-standing notions of universality and generalizability” (de Souza, 2014, p. 142) – concepts quite relevant to this article’s argument in both content and form.

3 What went wrong?

In the following sections and subsections, I draw on a critical reading of the documentary data and expert informant interviews (first and second sources of data) to explore the underlying mechanisms for EMIS’s limited success in terms of data utilization. Rather than taking the traditional approach of writing a findings section followed by discussion, I have used the voices of the informants as commentary to my initial data/document search and woven data, analysis and discussion throughout the sections below as I seek an explanation for what went wrong.

3.1 Matters of context, but does context matter?

In this section, I draw upon the documentary data to show existing justifications for the underutilization of EMIS and argue that these justifications remain inadequate, necessitating a deeper examination of the underlying causes.

A useful framing for this section, and the promotion and implementation of EMIS in development contexts in general, is to view EMIS implementation as a form of ‘policy borrowing’ from the Global North to the South10 (Hossain, 2023). In education, policy borrowing refers to adopting policies, practices, or ideas from a successful system in one country or context and implementing them in another to improve educational outcomes or inspire reform. This ‘what-went-right’ analysis justifies transferring best practices from certain high-income countries to development contexts, hoping for similar results (Steiner-Khamsi, 2013). The expected outcomes of EMIS are an increase in the use of ‘evidence’ in policy decisions. As part of this push for ‘evidence,’ the WB has funded 279 projects with EMIS components in 98 countries from 1998 to 201711 (Rossiter, 2020), with the average cost of EMIS development and strengthening activities ranging between 1 to 7 million USD (Abdul-Hamid et al., 2017). Donor support for statistics and statistical capacity has doubled over the last decade (Rossiter, 2020) with “…almost every basic education project in the bank (WB) (having) a component related to the EMIS” as Expert 1 explained, a sentiment shared by the other expert informants.

These efforts, however, have not guaranteed ‘success’ whether that success is measured in terms of data gathering or data usage. A 2017 review of education data availability across 133 low- and middle-income countries (LMICs) revealed that 61 countries still have no available data and 43 have data only at the national level with no disaggregation (Read and Atinc, 2017). Moreover, even with the capacity to produce extensive national data, countries often fail to utilize it effectively, limiting EMIS’s success to data collection rather than improving educational systems (Hossain, 2023). This conclusion echoes the WB’s assessment that many EMIS projects “failed” or “failed to advance” in terms of data utilization (Abdul-Hamid et al., 2017). Nor are the reasons for this failure clear, despite reports identifying the “deficiencies” in individual countries’ EMIS (e.g., Mintz and Saraogi, 2015a, 2015b; Saraogi et al., 2017a) and analyses of the entire WB portfolio of projects from 1998 to 2014 (Abdul-Hamid et al., 2017). So, what went wrong?

In a Portfolio Review into the World Bank’s EMIS activities from 1998 to 2014 (henceforth the Review), Abdul-Hamid et al. (2017) provide a comprehensive analysis of the WB’s 17-year portfolio in the area of EMIS. The Review concludes that although there are examples of successful EMIS, in many contexts, their operational performance did not meet expectations with “widespread deficiencies that ranged from unclear definitions and understanding of EMIS to ineffective implementation and utilization” (p. xiii). The four key (and, for our purposes, relevant) implementation challenges identified by this review are: (1) a misalignment between the “realities” of a country’s EMIS and the intended goals of the project, including the capacity, in terms of both human and financial resources, to implement it;12 (2) the government’s unclear understanding of the “function” of an EMIS; (3) weak efforts to institutionalize EMIS via policies and legal frameworks to ensure continuity and sustainability; and (4) a focus on the technology rather than the “people and processes” surrounding that technology (p. xiii).

Such challenges seem to be context-specific, and the Review does go on to recommend that a “needs assessment” be conducted, but it only mentions understanding the context’s EMIS needs in passing before going on to focus on the needs of the EMIS, i.e., identifying the barriers and enablers for the project’s implementation within the context. To deal with the issue of misalignment, the Review recommends that the project team understand the EMIS goals and align them with costs, overall objectives and lastly, “government needs” (p. xvi), which raises the question of what and whose EMIS goals are being served. The Review also points out that “low income, low capacity” countries do require adjustments, but it only recommends adjusting the “scale and scope of the project [emphasis mine]” (p. xvi).13

As for the government’s unclear understanding of the purpose of an EMIS, no real recommendations are made beyond considerations of weak political buy-in, the complexity of the education system in some settings, and a lack of a “data-driven culture” (p.31); the latter of which can take up to 20 years or so to develop in some settings (p. xvii). As for the usage of EMIS data, the Review identifies issues such as poor data quality, time lags that render the data obsolete, as well as the inability to understand the usefulness of EMIS and data in decision-making.14 In fact, the issue of limited use is itself highlighted as a challenge, with the only recommendation being that all efforts should be made to ensure an EMIS’s full potential as a tool for teaching and learning as well as compliance be realized (p. xvi). There is no real engagement with the underlying causes of these challenges, and context is presented as a barrier to be overcome instead of a fundamental component of EMIS implementation, representing the aspirations and goals of local actors.

Various WB country analyses and reports also examine the EMIS’s ability to be “credible, operational in planning and policy dialogue as well as teaching and learning” (Abdul-Hamid, 2014) using the SABER-EMIS framework to assess an EMIS’s ‘development’. These potentially offer the chance to examine the specific conditions that could be the cause of the challenges. The data reveals that EMIS projects in countries such as Timor-Leste, Ghana, Tanzania and Azerbaijan reportedly improved data collection but failed to improve data use, due to a lack of data ‘awareness’ as well as relevant knowledge and skills—human and organizational capacity—to collate, analyze and/or disseminate the data quickly and accurately (Abdul-Hamid et al., 2017; Saraogi et al., 2017b). Island nations such as Fiji, Papua New Guinea, Samoa, and the Solomon Islands—often grouped together as developing Pacific nations with “similar contexts” (E4) and therefore comparable challenges (Saraogi et al., 2017a)—have, with varying degrees of ‘success’, also collected EMIS data, but in all of them utility has been deemed weak.15 In these island nations, usage is ‘limited’ to the “central level to improve the education system business processes” (Saraogi et al., 2017a) such as allocation of school grants; data access is closed to people outside the Ministry; and there is little engagement with disseminated data, reportedly due to a poor data culture—as well as intermittent internet in the case of Fiji, Papua New Guinea, and the Solomon Islands16 (Mintz and Saraogi, 2015a, 2015b; Mintz et al., 2015; Saraogi et al., 2017a).

Such analysis again seems to suggest contextual challenges to the implementation and utility of EMIS, but neither the analysis nor the subsequent recommendations meaningfully engage with the context as anything more than an environment in which an EMIS project with predetermined goals and objectives exists. The same critique can be levelled at the analysis of the reasons for the continued limited usage of EMIS data. In fact, beyond acknowledging that certain contexts have challenges in terms of technology, financial and human resources, there is still no deeper analysis made into why these issues exist or persist.

This overview suggests that the utility of EMIS data is negatively influenced by the WB’s instrumental conception of context simply as an “enabling environment”17—merely a foundation for an effective EMIS (Abdul-Hamid, 2014, p. 39). This conception limits an analysis of what-went-wrong to an examination of the policy areas and levers involved. But a deeper look at the Portfolio Review and the country reports with and through the lens of the interviews reveals three profoundly contextual mechanisms operating within the structure of the ‘development project’, underlying the what-went-wrong analysis above. The first is a Eurocentric notion of the generalizability of EMIS, “a façade of universality” (Steiner-Khamsi, 2013, p. 21), in which the context of developing nations is denied its significance, seen as just a variable instead of an essential ingredient of design and application, and therefore, of the success of EMIS. The second is an enduring colonial mindset that considers the educational data of development contexts as yet another resource whose value can be extracted and used. And finally, at its deepest level, resulting from these two mechanisms and simultaneously generating them, is a devaluation of people from such contexts, as entities with agency and rights, in favor of data. I delve into each of these causal mechanisms as to why the data “dies” (E2) in the sections below.

3.2 The façade of universality

Though EMIS frameworks, working papers and country reports do recognize country context as an important factor, and each setting as unique, they simultaneously state that examples of “what works” from different countries model successful promotion and implementation for other nations and that all EMIS can be benchmarked against the same set of practices. The underlying premise is that the “functionality of an EMIS is universal across contexts because data is the core of its operations [emphasis mine]” (Abdul-Hamid, 2014, p. 19), and though it may “look” different based on context, this is a superficial difference in “how data is collected and processed,” usually due to the collection tools and “available technologies” (p.19–20).

This notion of “universality” lies at the heart of the World Bank’s rationale for the SABER framework for an effective EMIS, which mainly draws on examples from the US, the UK, and Australia (Abdul-Hamid, 2014). It also underpins the subsequent recommendations and the development of benchmarking tools (e.g., SABER-EMIS framework) to serve as diagnostic instruments and foundations for the design of new EMIS projects. These are based on “standards of good practice” (Abdul-Hamid, 2014, pp.9–10) taken from the practices of educational systems that perform well on international tests—mainly members of the OECD (World Bank, 2011)—as examples of the ‘advanced’ practices all countries should attain (Klees et al., 2020). The result is a ‘decontextualized’ set of EMIS practices and a linear “latent-to-advanced” ranking of EMIS based on the various indicators of practices coming from the Global North that gloss over contextual differences and cultural specificities, failing to acknowledge the challenges and distinct needs of the context as anything more than barriers.

This ‘façade of universality’ is why most IOs apply a one-size-fits-all approach to EMIS, promoting and implementing EMIS in ‘developing’ contexts in the same manner they would in North American and Western European countries. Such a mechanism not only contradicts the need to pay attention to context but also operates on the assumption that such systems will function in the same manner and provide the same results in Global South settings as they do in the North. Developing contexts are usually underfunded and reliant on donor aid, lack the institutional and technological structures and capacity to support the successful collection, analysis, and usage of EMIS data, and require additional considerations to align an EMIS with their own specific characteristics and needs rather than in response to global concerns (UNESCO and Global Partnership for Education, 2020).

Paradoxically, maintaining such a façade also means that even when countries implement a successful EMIS that is specifically designed for their context, it is benchmarked against the same SABER-EMIS indicators of best practices—and found wanting for not fully conforming to these “standards”. A good example of this is Fiji’s EMIS (FEMIS), which was a low-cost education data system in terms of both development and implementation, specifically designed for its context (Saraogi et al., 2017a; Abdul-Hamid, 2017). Fiji developed an EMIS by integrating its existing databases and relying on local capacities and resources to keep the system sustainable. This approach included maintaining a parallel paper system due to internet limitations and abandoning the census-based model used by most countries as the backbone of EMIS.18 However, because data utilization generally remains internal and localized at the central level—meaning the system does not comply with a preconceived notion of the utility of EMIS in terms of decision making and access to external stakeholders—FEMIS is not considered an “advanced” system (Saraogi et al., 2017a; Abdul-Hamid, 2017).

A focus on “what works” makes it difficult to promote or implement EMIS in a way that is sensitive or relevant to a country’s own contextualized settings because it formulates the (local) problem in a way that aligns with an already existing (global) solution (Steiner-Khamsi, 2013, p.28). The Review, for example, acknowledges that “EMIS vary dramatically across countries. In fact, each country is different, and within a country, often additional variation is seen at the local school system level” (Abdul-Hamid et al., 2017, p.1). However, context is not recognized as an essential element in the success or failure of an EMIS; instead, variations in context are simply viewed as challenges to implementation, with issues noted in “the political framework in which the EMIS functions, technological means, and differences in the culture around data and types of data utilization” (p.1). Again, Fiji’s EMIS can serve as a case in point. Although FEMIS was designed based on the context’s particular capacities and needs, the WB still perceives it as a “potential model for other countries in the region” (Saraogi et al., 2017a, p. 7), encouraging other “island” nations to follow its example where it does conform, since they have a “similar context” (Abdul-Hamid, 2017, p. 234). Underlying such an analysis is a preconceived notion of what an “advanced” EMIS should look like and against which every EMIS is measured, based on ‘evidence’ of what worked (as in the case of the island nations mentioned above). From this perspective, context, with all its elements, is just a variable in the implementation of this ideal EMIS rather than a determinant factor.

3.2.1 One size does not fit all

Operating under this one-size-fits-all approach, the system and the way the “data is being used” remain “donor-driven” in most development contexts, as Expert 4 notes, with IOs19 “build[ing] super heavy, very cumbersome expensive EMIS systems which do not work in the context. People do not know how to use it…[or] it does not work.” This sentiment is echoed by two other expert informants, who note that instead of examining the context, country reports often focus on IT considerations or the political basis of implementation, noting ‘issues’ with data culture and data utilization as if these can be separated from the context. The tendency to focus on the superficial systems and procedures for data collection (e.g., technical issues associated with information and communication technology and infrastructure) while overlooking the institutional and human resource capacities of the context is, for these experts, one of the symptoms of this approach.

True, challenges in terms of technological means may exist because of the limitations of the context (Abdul-Hamid et al., 2017; Iyengar et al., 2016), but these are often the least problematic factor because investing in hardware or software is “an easy fix” according to Expert 4. What this fails to address is the contextual suitability of the EMIS and the existence of the capacity to use these resources: “… the systems are built without really knowing the context… now the question becomes, how do you end up using it?” (E4). Nor is the issue an ignorance of the need for these capacities—the current approach recognizes the need for them but assumes that “we will build the system and then build their capacity” (E4), though they know it never happens. Ultimately, it seems to be an underestimation of how central such capacities and considerations of the context are to the successful implementation of such a system from the get-go: “… really seeing what the system, what the society needs… given the context, given the capacity, what kind of systems will work in that particular context?” (E4)—and then building systems based on countries’ existing capacity as well as their goals and aspirations.

Another symptom of this approach, based on the insights from the informants, is a shallow grasp of and engagement with the political environment specific to a context. The political framework in which an EMIS exists matters, but an individualized, contextual approach goes beyond simply adjusting to the politics, working within the institutional structures, or gaining an understanding of them. It is not about finding that “…best practice [that] can be taken from everywhere, but you need to customize it….” (E1), or finding similarities in context that seem to suggest a similar mode of implementation might work. According to Expert 4, “…even if you can talk about similar context… every people, context is very different, and it’s more about what they [want to] learn and how they want to implement”.

Three of the four expert informants have suggested, in fact, that considering context means resisting the inclination to “copy-paste” a system that has worked elsewhere and “embracing [and investing in] the local structures, the local context… everything that falls under that context umbrella” (E2) because individualized approaches are the only way forward. Such approaches involve the cocreation of a system from the ground up with the people in that context, one designed for them and by them, respectful of their particular capacities, priorities, goals, and needs, starting from: “…what are the questions that you want to answer?…or things that you want to change” (E3) all the way to what technology suits them best. Ultimately, it involves considering both the means and the ends of EMIS implementation outside the parameters of efficacy, i.e., what the EMIS should work for – in terms of what that society perceives as acceptable and desirable—and who has the ultimate say in defining the latter (Biesta, 2007).

The one-size-fits-all approach used to implement EMIS stems from a misinterpretation of the relationship between research and practice, one that relies on “evidence” of “what worked” to provide rules for future action as the “most effective means to bring about predetermined ends” (Biesta, 2007, p.18). Such reliance, however, overlooks the fact that the knowledge derived from research and practice does not predict future effectiveness but reflects past successes, and thus can only serve as a tool for making informed choices about possible lines of action (Biesta, 2007). There is value in learning from other systems or practices, the expert informants observed, as “examples of where it’s actually worked” (E4), but there must be a recognition that an ideal EMIS does not exist, nor does a “best practice”, and that ‘what works’ there may not work here (Alexander, 2010).

However, the real danger of this ‘façade of universality’ extends beyond the failure to recognize local contexts. Far from ‘empowering’ developing countries with data, adopting these ill-fitting systems has dictated how EMIS function, what data is considered relevant, and the uses to which this data is put. In the next section, I will discuss how the adoption of these systems, coupled with an enduring colonial mindset, may instead enable powerful, data-rich countries, researchers, and other organizations in the Global North to undercut the real, impactful capacity development in the South (Dahmm and Moultrie, 2021), resulting in data flowing only one way: out.

3.3 An enduring colonial mindset

A major challenge to the utility of EMIS in development contexts, and one identified by the UNESCO Institute for Statistics (2017b), is ensuring that it provides evidence for better decision making domestically as well as internationally. According to UIS, part of the challenge is that every country’s government has an international commitment to report on certain SDG 4 indicators, but “many (in reality just about all) require the use of databases that go beyond the EMIS backbone of data” (Van Wyk and Crouch, 2020, p. 17) into datasets coming from diverse sources such as household surveys, detailed poverty maps and the like. Their guidelines on EMIS use go on to explain that this does not mean that EMIS guidelines are a “tool on how to serve donors and international agency requirements” because “country needs are paramount” (p.17). My informants overwhelmingly agreed, with Informant 3 explaining that “there’s value that you can generate from international comparisons, but that’s not the purpose of an EMIS, …EMIS is really to inform national policy [emphasis mine].” But is that an accurate description of what happens in practice? Not according to Expert Informant 4, who explains that most policy documents on EMIS state that “the main purpose of EMIS is compliance, to be able to report on key SDG indicators.” A 2018 survey of national statistical office (NSO) representatives from 140 LMICs similarly revealed that the NSOs ranked international development partners as the most important users of data, well above domestic users, and thus their needs and priorities took precedence over national concerns (Prakash and Sethi, 2018).

Moreover, most international initiatives on statistical capacity building have focused on meeting the demands of global reporting and accountability pressures, concerned with improving the availability of indicators or filling gaps in data series. The result is that statistics are not recognized as a national governance issue and are not used to achieve national development goals (UNESCO Institute for Statistics, 2017b) – EMIS data has little value domestically. When the implementation glosses over context, and the support and funding provided for EMIS is driven by a global demand for monitoring progress rather than country needs, this can detract from the local demand for data, limit its use by national stakeholders, and even distort national development goals (UNESCO Institute for Statistics, 2017b). Often, the needs of the international community tend to “[crowd] out” national needs (E3). But is this aspect of the failure of utility an unfortunate, but incidental, byproduct of the way these ill-fitting systems have been promoted and implemented? Or is this “crowding out” of national needs deliberate?

Consider the type and scope of the data an EMIS currently gathers, which seem to serve international rather than national needs. Nationally, a great deal of data is needed for a successful national education system (Abdul-Hamid, 2017; Subosa and West, 2018; Tolley and Shulruf, 2009) because “[g]overnments need to manage the services that they provide to the country, to their people, …” and from that perspective, “an EMIS would be a whole set of different types of information that would allow the government to spend resources the most efficiently possible …” (E2). An EMIS typically includes data on all relevant educational components, covering accurate and detailed administrative data, but also relevant data sources providing information on finances, human resources, and so on that are directly aligned with national education goals (Abdul-Hamid, 2017). This is the data that can be of value to classroom instruction and support at schools (Makwati et al., 2003), aiding in the formulation of national educational policy in relation to issues of access, coverage, equity, efficiency, and relevance – in order to provide a quality education (Makwati et al., 2003; Abdul-Hamid, 2014). What is needed “…at national level to monitor or to implement or to plan for an education system is very large and wide,” Informant 2 explains, “…it goes well beyond what we need at international level to monitor the SDGs”.

SDG 4 has specific targets and indicators that require specific data sets (Van Wyk and Crouch, 2020; UNESCO Institute for Statistics, 2020), so “[a]t the international level …for monitoring SDGs…we only need a set of data that is collected through the UN Education Survey” (E2). Yet, to monitor progress towards the education SDG (SDG 4), a nation is often required to align its own planning with SDG 4 at the country level, recognizing the correlation of various development factors (Subosa and West, 2018). Data for this purpose is drawn from the educational system as well as “sources outside formal educational institutions, such as household surveys, labor market information systems, and health information systems” to shed light on the ways education is (and is not) supporting the global agenda and how it might do so more effectively (Subosa and West, 2018, p.5). This seems to be a deliberate widening of the purview of an EMIS, connecting educational data to other longitudinal data sources, which broadens the scope of the data feeding into it while crowding out national needs. The informants have, in fact, noted that there are constant demands from “every donor …saying, we need more data, better data” (E2); the question, Informant 2 asks, is “…But what for?”. And the answer seems to be the extraction of data.

3.3.1 The one-way pipeline of educational data

To consider the notion of extraction within an EMIS in a development context, it is important to recognize the power dynamics embedded in information flows and knowledge structures. Global knowledge production20 continues to be asymmetrical, with the Global North being historically the more dominant region in terms of both production of and access to knowledge (Ricaurte, 2019). Additionally, digital technology – in terms of hardware, software and infrastructure—as well as the technical capacities needed to ‘manage’ the data are still centered in the Global North (Gorur, 2022; UNESCO and Global Partnership for Education, 2020), where most (if not all) the international organizations and funding entities discussed are located.

The informants’ insights suggest that the data practices around EMIS reinforce these power dynamics, with information flows remaining largely unaltered. They acknowledge that the usage of EMIS data can be based on the logic of lesson learning and discovering ‘best practices’ from various countries to help the international community “[understand] the advantages and disadvantages of different systems” (E3). However, the aim is not simply to ‘learn’ what practices may be useful to adopt or consider (practices already critiqued above), but rather to see what could be learned from other systems to better position oneself in a competitive global economy: “[E]ducation is competing in a way…[and] it’s positioning [a] country to compete … especially wealthy countries [who] do things the way they have always done them by traditions and so on…” (E3).

As two of the informants have noted, data has become a way to monitor the efficiency of national education systems, benchmarking them against other systems. Data is used to benchmark (1) expenditure, or “value for money”, to “see how other countries transform … whatever amount of money that is into learning” and by extension “… are we getting value for our system or is our system working [emphasis mine]” (E3); and (2) the effect of human capital investment on economic growth because “education systems power the competition that leads [to] trade, global competition where companies decide to invest in countries … [and] all kinds of bigger economic policy issues” (E3). This suggests that EMIS projects and development aid are far from altruistic ventures; instead, these funds are “…increasingly being linked to the trade office, instead of a standalone overseas development or …foreign policy, its trade policy” (E3) as a way to engineer the creation of skill pools to attract national and international investors within a global labor market as nations compete for the most valuable human capital.

Three of the four informants also described more directly extractive practices they noted amongst some IOs and the private companies connected to these development agencies. Multiple EMIS software platforms exist within the development context, developed and supported by multiple IOs, but often involving partners from the private sector (UNESCO and Global Partnership for Education, 2020). These private companies, through their connections within the international community, are able to leverage their technology, resources, or technical capacity to extract data from national contexts for their own purposes and undermine a country’s capacities to collect and use its own education data. This can include software companies that host and maintain the EMIS servers and database selling the data to third parties (Abdul-Hamid et al., 2017). Expert Informant 2 describes a specific instance (as an example among many), where such agencies, “private millionaire, billionaire agencies,” came into a development context and “… basically collected school location data, kept a copy for themselves, gave a copy to the ministry, thinking they did well,” when, in fact, the ministry had been dispossessed and disempowered, left unable to learn how to collect and use that data for itself.

Two of the informants also highlighted the degree to which some development agencies are involved in such practices. They explained that some IOs not only bring in private consultants with extractive practices but also allow these consultants to make proprietary claims to EMIS software and hardware, effectively withholding access to the EMIS as well as the capacity and skills to use it. In some cases, such consultants and development agencies have gone so far as to claim that any EMIS they helped develop and provided software for “belonged to them, not to the country” (E3) in several of the contexts they supported. This forces a government “… to go back to this consultant or private company and pay for the services which they should have skills [for] in-house” (E3), or, worse, to purchase its own data back from the private company ‘holding’ it on its systems in order to migrate it to another system (Abdul-Hamid, 2017). Such practices further extract value and disempower the very governments these IOs are supposed to support.

Another extractive practice, and one that hits closer to home, is the employment of development data for research purposes, where “a caste of researchers that are supported by Western donors …[put] pressure …on governments to produce… more granular data so that [they] can use it to publish their own research” (E2). Researchers benefit from the extraction of this “data-wealth” (Kohnke and Foung, 2024, p.6) to generate insights and knowledge – whether for their personal advancement or the “greater good”—with little consideration of whether or how the data will feed back into the communities from which it was collected. Instead, “it’s very documented that …research findings published in peer-reviewed journals [are] relatively inaccessible to actual planners and managers in countries whose data is being used to produce these research findings” (E2). This results in a “data pipeline” that is “basically one-way,” where “there’s really very little… opportunity for people in governments to actually use the evidence that is produced based on their own data…” (E2).

These extractive data practices help the Global North maintain its epistemic dominance as the increasing quantification of education continues to yield new insights and shape perceptions about teachers, students, and learning. In fact, the organizations who possess the means to aid this quantification (originating in the Global North) have used the power of data to ‘construct’ what ‘counts’ as a quality education, a good student or an effective teacher and then leverage these constructs socially, culturally and economically (Prinsloo, 2020).

3.3.2 Data colonialism

The extraction and appropriation of data in such a manner is, for Couldry and Mejias (2019), a new form of colonialism: data colonialism, which combines “the predatory extractive practices of historical colonialism with the abstract quantification methods of computing” (p. 337). This form of colonialism involves the extraction of data about human relations and activities (such as education) as well as human-object and object-object interactions (such as data from/about infrastructure, natural resources, and energy) (Couldry and Mejias, 2019). This is followed by a calculated extraction of value through data (Couldry and Mejias, 2019) to generate opportunities for economic and societal “value creation” (World Economic Forum, 2011). Such data—especially “personal data”21—seems to have become the “new ‘oil’… a valuable resource of the 21st century… a new type of raw material that’s on par with capital and labor” [World Economic Forum (WEF), 2011, pp. 5–7]. However, like any other resource, it is one that extractive rationalities have sought to portray as ‘just there’ – natural, abundant, easy to appropriate from its rightful owners, and unproblematic to extract (Couldry and Mejias, 2019). This resonates with my informants’ perspectives, all of whom have highlighted the value of EMIS data, describing it as “sitting on a pot of gold” (E2), but one whose value—at least for the Global South—remains beyond reach.

In their metaphor of data as a “natural resource”, Couldry and Mejias (2019) contend that just as natural resources cannot be used in their raw form, neither can data. It needs to be configured for “capture,” a process by which daily life (or from this article’s perspective, the components and day-to-day processes of an educational system) is “reconfigured and represented in a form that enables its capture as data” ostensibly to better understand it, manage it and use it (Ricaurte, 2019, p. 339). Digital platforms,22 such as EMIS, are mechanisms of ‘social quantification’ that allow the production of data as the “social for capital, that is, a form of ‘social’ that is ready for appropriation and exploitation for value as data, when combined with other data similarly appropriated [emphasis mine]” (Couldry and Mejias, 2019, p.338). Such logic invites us to (re)consider the implementation of EMIS as a mechanism of ‘quantifying’ education, a system able to transform the various components and processes of education into extractable (and comparable) data that now has value, providing knowledge about an education system and the learning outcomes within a country.

The rationale for the implementation of EMIS is that the large body of data needed for data-driven decision making in and on education can only be useful—nationally and internationally—if it is in the ‘right’ form: quantitative data, purposefully and methodically collected, organized, and analyzed so that it can be reflected upon in a timely manner (Abdul-Hamid, 2014). This type of reliable, relevant, and easily accessible data is produced through a data value chain. In this value chain, various EMIS activities and processes add value as they transform the raw data into ‘evidence.’ As the data is collected, processed and analyzed, various processes ensure that the data is (1) accurate and reliable, (2) in a usable format, (3) organized for easy retrieval, and (4) analyzed to produce actionable insights to inform decision making (Faroukhi et al., 2020).
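The four steps of this value chain can be sketched as a deliberately toy pipeline. This is an illustrative simplification only, not a description of any actual EMIS implementation; every field name and figure below (school_id, enrolled, region) is hypothetical:

```python
# Toy sketch of a data value chain: raw records are (1) validated,
# (2) normalized into a usable format, (3) organized for retrieval,
# and (4) aggregated into an actionable insight. All values are invented.

raw_records = [
    {"school_id": "S1", "enrolled": "120", "region": "North"},
    {"school_id": "S2", "enrolled": "95",  "region": "North"},
    {"school_id": "S2", "enrolled": "95",  "region": "North"},  # duplicate entry
    {"school_id": "S3", "enrolled": "n/a", "region": "South"},  # invalid entry
]

def validate(records):
    """Step 1: keep only records whose enrolment figure is numeric."""
    return [r for r in records if r["enrolled"].isdigit()]

def normalize(records):
    """Step 2: cast fields into a usable, typed format."""
    return [{**r, "enrolled": int(r["enrolled"])} for r in records]

def organize(records):
    """Step 3: index by school_id for easy retrieval (also drops duplicates)."""
    return {r["school_id"]: r for r in records}

def analyze(indexed):
    """Step 4: aggregate into an 'actionable insight': enrolment per region."""
    totals = {}
    for r in indexed.values():
        totals[r["region"]] = totals.get(r["region"], 0) + r["enrolled"]
    return totals

insight = analyze(organize(normalize(validate(raw_records))))
print(insight)  # {'North': 215}
```

Even this caricature shows where value is added: the invalid and duplicate records are filtered out, the figures become computable, and only the final aggregate is 'evidence' in the sense the international community recognizes. Everything the raw records cannot capture never enters the chain at all.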

Tracking the data related to the indicators of the global goals methodically and making them easily retrievable also requires similar activities and processes (Abdul-Hamid, 2014; Subosa and West, 2018). In fact, without EMIS, national educational data does not feed into other regional and international databases in a “systematic, comprehensive, integrated, and well-presented manner” (Abdul-Hamid, 2014, p. 15). So, the crux of the international community’s argument for the implementation of EMIS boils down to needing a system to manage “what data [is] collected and, most importantly, how that data can be turned into useful information and knowledge” (Tolley and Shulruf, 2009, p.1199). This rationale, combined with the crowding out of national needs and the extractive practices common to development contexts, provides a disturbing backdrop to the investment of IOs in EMIS. Such an investment, from the perspective of data colonialism, is the calculated establishment of a tool that enables the extraction of the value of data by transforming it into a usable format.

These discourses and practices, in CR terms, impact the way EMIS and educational data are valued and used in specific contexts to “restrict or enable different sets of actors and the actions they take” (Dale, 2015, pp. 344–345), facilitating an exercise of power in which resource- and knowledge-rich actors are able to advance their strategic interests while creating substantial barriers for those lacking similar resources to achieve their strategic aims (Dale, 2015). Even if inadvertent, such dynamics reflect historical patterns of colonial exploitation and extraction (Muyoya et al., 2022). These are patterns in which the people behind the data are, at best, a secondary concern, and in which their devaluation simultaneously results from and continues to generate such practices, as I will discuss in the section below.

3.4 A devaluation of people

Much like Couldry and Mejias (2019), Ricaurte (2019) envisions data-driven or evidence-based ‘rationality’ as “an infrastructure of knowledge production developed by entities situated mainly in the Global North and an economic system that supports capital accumulation and economic growth” in which our digital ‘selves’ are quantified and commodified (p. 351). In this paradigm, educational systems, and the complex cultural, moral, and political problems that are inherently embedded in them as expressions of the history and the context they exist in, are “reduced to problems that can be measured and quantified. They are matters that can be ‘fixed’” (Birhane, 2020, p.397) if we just have enough data.

Education systems are diminished to the “demographics of the students” (E1) and the general demographics of schools: data about “…the teachers, the curriculum, the learning spaces, the method, and the support system” (E1), and data from assessments and measurements “to capture their learning… the well-being of the of the children, [and] what are the different issues that can affect their learning… in order to match them with the needs” (E1). Other aspects of people’s lives also need to be quantified: “…information about their homes, about their parents, … in order to get the big picture” (E1). The dynamic and interactive human activities and processes that are constitutive elements of an educational system are simplified to whatever the international community, data engineers, and tech corporations think they mean (Birhane, 2020), reducing people to numbers on an EMIS dashboard. Or worse, conflating counts of people with financial figures: “…because most of the time the funding [is] associated with the number of students and for that, accurate statistics are very important …either for general budget or government budget allocation, or from donors [in the case of fragile context]” (E1). From this perspective, data-driven decision making can be seen as an expression of the coloniality of power (Quijano, 2000), “manifested as the violent imposition of ways of being, thinking, and feeling that leads to the expulsion of human beings from the social order” (Ricaurte, 2019, p. 351)—replacing people with their data.

3.4.1 Governance by/through numbers

The appeal of data-driven decision making relies on a perception of data as “neutral, democratic, and secular” (Gorur et al., 2023, para.7), presenting data or e-governance as an effective method of efficient and fair governance. By relying on ‘facts’ and reducing the influence of subjectivity and ideology in policymaking (Parkhurst, 2017), data has become the ‘best’ means to overcome issues of inefficiency and corruption in bureaucracy and to improve the quality and equity of learning (Gorur et al., 2023), decentering people as key agents in the decision-making process.

This conception of data has implications for the type of data the global community values and requires as the ‘gold standard’ for evidence-based decision making: so-called “hard facts”, evidence based on randomized controlled trials (RCTs) (Nevo and Slonim-Nevo, 2011) and/or quantitative data that has been checked for accuracy, validity and reliability within strict quality parameters23 (UNESCO Institute for Statistics, 2017a). As Expert Informant 2 explained: “[Globally, we] tend to focus only on ‘hard’ data like numerical data, something that is very standardized… following strong methodologies… [such as] using RCTs… [we are] obsessed with actual numerical data” (E2).

It also devalues the type of data that education systems have relied on domestically for decades, according to Expert Informants 2 and 3: “qualitative evidence,” where data is tempered by a human perspective. This is where each local community “…know[s] what’s going on in their district …the pockets of exclusion… the hard-to-reach places…. this is sometimes based only on their experience… not coded into the database” (E2). Yet often, “… this is the type of information that is useful when you plan for an education system…historical, political, social developments that are not necessarily like numerical data in the census, [but] that are very helpful [and] are considered evidence at national level” (E2). However, both the global community and, in development contexts, the donors have a hard time accepting this as “data,” refusing to recognize that “the source of evidence that the countries [governments] have been using for a long time is valid” (E2).

This approach considers such qualitative data “unfounded opinion” (Biesta, 2007, p.4), valuing quantitative methods as ‘scientific, neutral, unbiased assessments’ that have a more generalizable nature. However, RCTs themselves lack generalizability since they do not take into account the context within which an intervention takes place or the difficulty of attributing an outcome to a particular cause in an educational system (Biesta, 2007). And data is not neutral; neither is evidence-based decision making or the systems that foster it. Data and data-driven technologies are “developed by human beings within a social and cultural context, [and] values and world views can be embedded within them” (Muyoya et al., 2022, para.2).

Steiner-Khamsi (2013) argues, in fact, that ‘governance by numbers’ is no less political or more rational than other modes of regulation, asserting that evidence-based policy planning is inherently political; however, the political influence of data masquerades as scientific rationality, obscuring the fact that its recommendations for ‘best practices’ often stem from power and ideology, not evidence (Klees et al., 2020; Ozga, 2009). Moreover, “tech-mediated” decision making discounts how pivotal qualitative, nuanced, contextual information is to understanding and supporting the most marginalized and disadvantaged (Gorur et al., 2023), though it is not easily translated or extracted into a “usable format” that can be stored on an EMIS.

Yet the idea of data as value-neutral ‘evidence’ is so pervasive that quantitative data – such as that produced by EMIS – has become a tool of governance on both the national and international levels, used for performance management and financial accountability, and to support further funding mechanisms nationally and internationally by conveying a country’s progress on its global initiative commitments (Tolley and Shulruf, 2009). My informants all expressed the notion that funding has increasingly become bound to the ability to ‘show results,’ with governments and IOs only providing funds when and if performance targets are achieved. Expert Informant 4 reflected that EMIS’s purpose for “…the international community, generally donors, [is] … to be able to track progress on how their investments are faring”.

Domestically, funding has become similarly bound to this notion of evidence, this ability to provide ‘results’: “…if you cannot show your outcomes, how you have improved your outcomes, or the good things that you are doing… then the health ministry can, and they’ll get the funding…” (E3). Informant 3 also explained that “… having some really good direct measures of outcomes in health has changed the financing for public services quite a bit,” going on to indicate that “clearly that’s the way the world has worked in the last 15–20 years” (E3). Such evidence-based financing (a deeply impactful form of decision making) not only draws a false equivalence between how education and health systems work,24 but also shows how this narrow idea of ‘evidence’ significantly shapes and drives particular behaviors and actions in terms of decision making and governance, which are often determined by the Global North.

3.4.2 (Re)humanizing the numbers

This push for e-governance seeks to establish numbers as the main agents in decision making, suggesting that the “prevalent decision-making culture,” which generally relies on “the intuition of senior managers” to make important decisions, is a major internal obstacle to “progress” in both companies and public institutions because “executive instinct is challenged by the facts of hard data” (Bilbao-Osorio et al., 2014, p.47). Similar arguments have been made to promote the implementation of EMIS. Interestingly, my interview data revealed that the existence of such mechanisms and structures does not negate human agency, which, from a CR perspective, is the power of human actors to influence these structures and mechanisms (de Souza, 2014). So, how much “hard” data do policymakers actually use in their decision-making processes, nationally and internationally? How impactful is data on their policy making? According to Expert Informant 3, very little:

… [when] UNESCO was in Paris, the Statistical Office was not next to the Education Policy Office, it was in the basement with catering and the janitors, …. if it was important, they would have it right next to them, … they would be asking questions, and they would be saying we need this [and] do this. And it just wasn’t the case… that was a nice analogy [for] what I see in countries… (E3)

They go on to explain that this observation is not limited to education policy: “…you find this in other sectors as well …in the National Statistical Office or the Finance Minister again, not using data, but just using their envelope and doing historical budgeting” or using “…a shortcut, so they do not have to look at the data even [emphasis mine]” (E3). Policy makers are amongst the groups that traditionally require the least amount of data “… [the general] public needs the least [information]. But decision makers are down there with the general public. They just need the big story” (E3).25

These insights are not meant to portray inefficiency, though clearly policymakers find it easier to perpetuate norms or pursue the path of least resistance, but to highlight that for decision makers, numbers are not enough: “where I’ve seen ministers make changes …clearly it wasn’t just the numbers that compelled them….” (E3) because “… the numbers are dehumanizing in a lot of ways… people want the human as part of it” (E3). This “human” part is in stories, which bring “… more emotional immediacy than just the numbers alone or some kind of … statistical [sic, statistics]” (E3).

However, the persistent narrative that ‘hard’ data is the only source of neutral, reliable, valid ‘evidence’ has resulted in a tension between the need for numbers and the acknowledgment of the decision makers’ humanity and that of the lives their decisions affect. Policy makers do not have a great deal of confidence in qualitative evidence on its own: they are willing to hear these “stories,” but they need quantitative evidence to give them “…some confidence in knowing that [if] they are going to change the national policy, that they have a good representation of what’s going on in the country” (E3). Informant 3 addresses this tension when they explain that “[if you are] trying to influence the finance minister to spend more money.…there really is space for narratives and for stories [referring to qualitative data showcasing people] that then link up with the data” (E3). Such perspectives speak to the real agency of people and their experiences in continuing to shape decisions despite the push to establish numbers as the primary agents in decision making.

3.4.3 (Re)centering people as sources of data

A final consideration in how e-governance devalues people in favor of their data is the way it sidesteps any real acknowledgement of their agency and rights as the sources of the data in discussions of EMIS implementation and usage. This consideration is based on the data’s silence with regard to consent, ownership of personal data and individual rights to privacy and confidentiality, particularly at the data collection stage. Although there is an acknowledgement of the highly sensitive nature of educational data and the need to secure it, the main focus is on usage: the security of data access and storage to prevent data theft or breaches, as well as ensuring authorized use and disclosure in compliance with national privacy and confidentiality laws and policies (Abdul-Hamid, 2014; Abdul-Hamid, 2017).

Concerns regarding the underlying act of data collection, such as concerns with personal privacy, consent or data ownership, are generally bypassed as if the collection of personal data is a fait accompli. The SABER framework for an effective EMIS (Abdul-Hamid, 2014), for example, refers to the legal frameworks of data confidentiality to ensure authorized usage, secure access and storage, but it does not address consent or individual data ownership, focusing instead on freedom of information laws providing increasing access to education-related information. Similarly, in the Review, EMIS data security is the main concern, with privacy and confidentiality issues only explicitly discussed in the context of the implementation of a health information system involving the private health records of citizens.

Even when a document considers individual agency and rights, such concerns are given minimal consideration and seem to have little significance to implementation. For example, in Data for Learning26 (Abdul-Hamid, 2017), data security to maintain the confidentiality and privacy of collected data is thoroughly discussed, but consent to the collection of data is mentioned only once—with a caveat. Individual consent is considered within a section on data protection measures, where under the data collection limitation principle, these measures specify that the collection of educational data, especially if linked to other national public services such as health, social security and so on, should “follow lawful and fair means and, where appropriate, include the consent of education stakeholders [emphasis mine]” (p.146). Individual rights to understand the need for, to access, to change or to delete personal data are also briefly discussed within these data protection measures (under the individual participation principle); however, these measures do not provide the choice of opting out and rely on national governments to guarantee and implement them (Abdul-Hamid, 2017).

Even the country reports sidestep these issues. They do acknowledge concerns with data security and privacy once the data is collected [though they rely on the availability of national data policies or guidelines27 for confidentiality and “privacy assurances” (Saraogi et al., 2017a, p. 8)], but they do not consider any additional measures to address consent, individual privacy or data ownership at the data collection stage. Moreover, even when policy gaps in the areas of privacy and confidentiality are highlighted in these reports as detrimental to the ‘success’ of EMIS implementation (Mintz and Saraogi, 2015a; Mintz et al., 2015), the main concern remains securing the privacy and confidentiality of the data collected for usage in order to garner and/or maintain public confidence and support for the continuation of the project (Abdul-Hamid, 2017).

None of the expert informants mentioned consent as a factor in implementing EMIS or collecting educational data. A single expert informant engaged with the idea of data privacy only to dismiss it as a concern, explaining that: “Oftentimes countries are hesitant to have that granular data…. but what gets submitted to EMIS at the national level [is] very consolidated aggregated numbers [emphasis mine]” (E4). This assessment contradicts most descriptions of an ‘ideal’ EMIS, which indicate that it should also allow data to be disaggregated down to the student level (Montoya, 2018), in addition to incorporating even more personal data from diverse sources, including financial, health and social services records28 (Abdul-Hamid, 2017).

These observations reveal that ownership, consent and individual privacy do not seem to be serious considerations when it comes to educational data29. The UN has even presented the argument that personal data collected via digital platforms is of no real value since it is “merely” the “exhaust” exuded by people’s lives, and so not capable of being owned by anyone (Letouzé, 2012, p. 9). Instead, this data is viewed as an accessible resource that is “just there” – disconnected from the people whose data it is – denying them any real rights and agency in its existence or usage.

This obfuscation of the extractive nature of the process of data collection through EMIS corresponds to the logic of data colonialism. It involves the appropriation of the value of data by pretending that this collected data is ‘raw material’ with natural value (Couldry and Mejias, 2019), enabling the international community to disregard both the context and the people behind the data. Such mechanisms allow the Global North to assert that such data is also ‘free’ to take and appropriate (Kohnke and Foung, 2024), an ideology similar to the colonial concept of terra nullius or no man’s land (Couldry and Mejias, 2019). It is this mindset and its ideologies that are “reminiscent of the colonizer attitude that declares humans [and their data] as raw material free for the taking” (Birhane, 2020, p. 398).

4 Conclusion

In this article, I have endeavored to critically examine the drive for data generation and the building of ‘better’ data infrastructure in development contexts, embodied by the implementation of EMIS, as part of a strategic approach to improve a government’s efficiency, prevent the wastage of resources and provide better value for money (Tennant and Clayton, 2010). Central to the discussion was how instrumental this focus on ‘what works’ was in the diffusion of EMIS, especially by the World Bank, while noting that this attitude towards increased effectiveness, efficiency, and accountability as a rationale for data generation has clearly not extended to the national use of educational data. IOs and governments have made substantial investments in EMIS data, but their efforts have had minimal returns nationally. Despite that, there are continuous demands for data from all parts of society (United Nations, 2014), without considering the “…waste of resources, of time, of anything that you can imagine” (E2) if that data continues to go unused, particularly for its main purpose. In grappling with potential explanations of why ‘what works’ did not work, and despite the limited scope of the data, this article reveals three mechanisms that underpin these empirical manifestations of EMIS and data usage by restricting and enabling certain actors, actions and events.

The first mechanism is the notion of “universality,” which promotes zeroing in on a single set of ‘universally true’ parameters for an EMIS or ‘universally effective’ programs for its successful implementation. Such an approach seems far from sound practice, given differences in geographic context, time period, organizational or institutional settings, and the significant sociopolitical, cultural, and economic heterogeneity of most contexts (Pritchett and Sandefur, 2014). The second mechanism is a colonial mindset in which EMIS ‘quantify’ education to enable the Global North to transform and extract specific kinds of data. This perspective presents the implementation of EMIS as an extractive mechanism, a form of data colonialism. It also draws a parallel between the way the Global North simultaneously imparts value to data through EMIS and renounces that value, and the way historic colonialism rationalized (and then normalized) resource appropriation and dispossession. This is done by claiming that personal data is a resource so valuable that it is on par with capital and labor (World Economic Forum, 2011), while simultaneously contending that it is a resource whose value would go unrealized until and unless the Global North refines and ‘appropriates’ it for some purpose (Couldry and Mejias, 2019). Such a renunciation of the inherent value of the data causes and is caused by a devaluation of people—the third mechanism—which decenters people as integral actors in the decision-making process and obscures their rights and agency as the source (and ultimate beneficiary) of that data.

Such an explanation positions EMIS as part of a much broader process of data extraction, storage, processing, and analysis – a process that is crying out for analysis through a decolonial lens (Ricaurte, 2019). It invites further consideration of the understandings that underpin these systems, given the investments in EMIS and the continued pursuit of educational data as part of the development agenda. Moreover, it invites further analysis as to whether the purposes and envisioned uses of such systems, derived from the experiences of the Global North, truly serve the South. Equally important are considerations of whether and how such practices function as methods of data extraction and thus constitute new forms of colonialism that must be actively resisted (Gorur et al., 2023) because, if having data is like sitting on a pot of gold, then this article asks: Whose gold is it?

Data availability statement

The datasets presented in this article are not readily available to protect participant privacy and comply with ethical guidelines. Interview transcripts cannot be shared beyond the research team. Requests to access the datasets should be directed to manal.elmazbouh@auckland.ac.nz.

Ethics statement

The studies involving humans were approved by the University of Auckland Human Participants Ethics Committee. The studies were conducted in accordance with the local legislation and institutional requirements. The participants provided their written informed consent to participate in this study.

Author contributions

ME: Methodology, Writing – original draft, Data curation, Investigation, Conceptualization, Writing – review & editing, Formal analysis. RS: Writing – review & editing, Conceptualization, Supervision. KL: Supervision, Writing – review & editing, Conceptualization.

Funding

The author(s) declare that no financial support was received for the research and/or publication of this article.

Acknowledgments

We would like to extend our thanks to Dr. Susan Carter and Dr. Barbara Grant who have generously reviewed and provided comments on earlier drafts of this paper. Our sincere gratitude also goes to the interview participants without whom this work would not be possible. The authors take full responsibility for the final text.

Conflict of interest

The authors declare that the research was conducted in the absence of any commercial or financial relationships that could be construed as a potential conflict of interest.

Generative AI statement

The author(s) declare that no Gen AI was used in the creation of this manuscript.

Any alternative text (alt text) provided alongside figures in this article has been generated by Frontiers with the support of artificial intelligence and reasonable efforts have been made to ensure accuracy, including review by the authors wherever possible. If you identify any issues, please contact us.

Publisher’s note

All claims expressed in this article are solely those of the authors and do not necessarily represent those of their affiliated organizations, or those of the publisher, the editors and the reviewers. Any product that may be evaluated in this article, or claim that may be made by its manufacturer, is not guaranteed or endorsed by the publisher.

Supplementary material

The Supplementary material for this article can be found online at: https://www.frontiersin.org/articles/10.3389/feduc.2025.1616717/full#supplementary-material

Footnotes

1. ^For the purposes of this article, data-driven and evidence-based decision making are used interchangeably; while there are nuanced differences between the two terms, such as the broader scope of “evidence” which can include qualitative data and expert opinions, they share the core principle of using data to make “informed” and “objective” decisions.

2. ^The article focuses on EMIS use in development contexts, which are also referred to as low- and middle-income countries or the Global South. These terms are used interchangeably to describe countries that face economic and development challenges, often political instability, and which are mainly located in the Southern Hemisphere.

3. ^Other successful EMIS ‘activities’ include the development of an EMIS to manage teachers and provide access to education in Afghanistan (Abdul-Hamid et al., 2017); however, given the current turbulent political climate and the lack of clarity surrounding the current status of educational governance structures—including the operation of EMIS—I have chosen not to highlight it as an example in this article. I have similarly chosen not to include Chile—another success story often included in development discourse—because its upper-middle-income status sits outside this article’s primary focus on low- and lower-middle-income contexts (Abdul-Hamid, 2017).

4. ^In this paradigm, the empirical involves observable experiences and events; the actual pertains to unobserved but occurring experiences and events when powers in objects are activated; and the real refers to unobservable structures, causal powers and potential mechanisms that lie beneath those events and within those objects (de Souza, 2014).

5. ^This is not to suggest the process is linear; it is rather an iterative deepening of knowledge as we move amongst and between the strata (de Souza, 2014).

6. ^Although UNESCO has documentation on EMIS and this has been considered, the work of the World Bank on this portfolio is more extensive and more thoroughly reported.

7. ^Grey literature is literature that has not been peer reviewed and is produced by an entity where publishing is not a primary focus (Schöpfel and Farace, 2010).

8. ^The expert informants have experience in EMIS and/or educational data ranging from 6 to 30 years with three of them working at more than one multilateral organization over the course of their careers.

9. ^I feel it is important to acknowledge that I come from a development context, one that is donor dependent and formerly (some would argue still) colonized by a European nation. My background may indeed color my interpretation of data, but it also gives me a unique perspective into the issues. In RTA, this subjectivity is an analytic resource rather than a source of bias (Braun and Clarke, 2021).

10. ^Global North and Global South are binaries that have been defined as north/south, enfranchised/disenfranchised, First World/Third World; for the purposes of this article, the Global North comprises ‘First World’ countries, mainly in North America and Western Europe, and the Global South comprises ‘Third World’ countries in which political, economic, and educational inequalities abound (Trefzer, 2014).

11. ^Abdul-Hamid et al. (2017) similarly report that between 1998 and 2014 the World Bank Education Portfolio included 415 activities in developing countries with 236 (57%) having an EMIS component.

12. ^Identified as the main issue in ten projects implemented in Albania, Argentina, Bulgaria, Bolivia, Colombia, India, the former Yugoslav Republic of Macedonia, Serbia, Sierra Leone, and Vietnam (Abdul-Hamid et al., 2017, p.31).

13. ^These recommendations were made in 2017 after nearly 20 years of EMIS project implementation by the World Bank.

14. ^Eighteen projects reportedly faced this issue: Brazil, Chad, Colombia, Costa Rica, Côte d’Ivoire, Ghana, Honduras, India, Kenya, Kosovo, Lesotho, FYR Macedonia, Nigeria, Pakistan, Sierra Leone, St. Kitts and Nevis, Tanzania, and Uruguay (Abdul-Hamid et al., 2017, p.37).

15. ^The SABER-EMIS framework has a linear scale which scores an EMIS’s “development” on four policy areas from latent to advanced (Abdul-Hamid, 2014).

16. ^Samoa publishes an annual statistics handbook and sends it to schools, avoiding internet challenges that may otherwise prevent access to the annual stats (Mintz and Saraogi, 2015a).

17. ^This “enabling environment” includes the legal framework, organizational structure, and institutionalized processes, human resources, infrastructural capacity, and budget of the system (Abdul-Hamid, 2014, p. 39).

18. ^They completely rejected the census as a way to collect pre-aggregated student and teacher data for use in Net Enrollment Rate (NER) and Gross Enrollment Rate (GER) to improve data quality (Saraogi et al., 2017a).

19. ^Although the expert informants often refer to the IOs as “development partners,” I’ve used IOs for the sake of consistency.

20. ^In practical terms, global knowledge production often occurs within universities, research centers, and corporations. These entities engage in activities like scientific research, technological advancements, and scholarly work to contribute to our collective understanding and drive ‘progress’. So, when we discuss asymmetries in global knowledge production, we are acknowledging that some regions (often in the Global North) have historically been more dominant in this process, while others may be underrepresented (Trefzer et al., 2014).

21. ^This report by the World Economic Forum (2011) defines personal data as data (and metadata) created by and about people, encompassing: volunteered data, observed data, and inferred data which would encompass many of the types of personal data commonly included in EMIS systems and census surveys around the world. They also mention that government agencies are using personal data to “deliver an array of services for health, education, welfare and law enforcement” (p.8).

22. ^To Couldry and Mejias (2019) these platforms include Amazon, Apple, Facebook, and Google in “the West,” and Baidu, Alibaba, and Tencent in China (p.34).

23. ^UNESCO Institute of Statistics (UIS) developed an instrument called the Education Data Quality Assessment Framework (Ed-DQAF), based on a data quality assessment methodology created by the International Monetary Fund (IMF), to evaluate the quality of information produced about education. The latest framework for the tool assesses six clear areas of data collection: (1) pre-requisites of quality; (2) integrity; (3) methodological soundness; (4) accuracy and reliability of data; (5) serviceability; and (6) availability.

24. ^This is a questionable homology that was made at the inception of evidence-based decision making in education (Biesta, 2007).

25. ^Expert Informant 3 based their description of the interaction of data and policy makers on Nzama et al.’s (2023) adaptation of the information pyramid as a tool for improved government decision-making processes, which identified different levels of data analysis from raw data to key indicators (p.7 – Figure 3). They also use this pyramid to highlight where the data is needed: “…there’s a technical layer of people that do require the data…And, if you are looking at cost efficiency or if you are looking at modeling …that kind of work that is very data-driven” (E3).

26. ^This is a 300-plus-page volume providing detailed guidance on building and sustaining an EMIS.

27. ^Samoa and Papua New Guinea, for example, have no legal frameworks to guarantee the confidentiality of respondents’ data or to ensure the data are used solely for statistical purposes, yet EMIS has been implemented and the data is used regardless (Mintz and Saraogi, 2015a; Mintz et al., 2015).

28. ^The data collected by EMIS has become increasingly granular with the introduction of web-based individual student tracking systems and human resource management information systems. These increasing demands on systems to collect disaggregated data and track individual children are meant to allow for an analysis of the complex socio-economic factors affecting a child’s progress through the education system, or exclusion from it (UNESCO and UNICEF, 2019).

29. ^Nor is this finding limited to development contexts. In their review of articles from impactful journals on educational technology, Kohnke and Foung (2024) found that educational entities are generally quite susceptible to data colonialism. They report that the implementation of educational technology usually produces “learning data [that] is commonly used to generate value (though not always profit) … for institutions” (p.2), in the form of improving engagement and outcomes internally or contributing to knowledge and research about educational quality or learning outcomes externally. However, users were not always aware that their data had been appropriated, and in many cases their consent was not sought (Kohnke and Foung, 2024). This is especially true of student users, whom Kohnke and Foung (2024) describe as the population with the least power in this system, the greatest potential for exploitation, and the least control over their data and its use.

References

Abdul-Hamid, H. (2014). “What matters most for education management information systems: A framework paper,” in Systems Approach for Better Education Results (SABER) working paper series. Washington, DC: World Bank Group.

Abdul-Hamid, H. (2017). Data for learning: Building a smart education data system. Washington, DC: The World Bank.

Abdul-Hamid, H., Saraogi, N., and Mintz, S. (2017). Lessons learned from World Bank education management information system operations: Portfolio review, 1998—2014. Available online at: https://openknowledge.worldbank.org/bitstream/handle/10986/26330/9781464810565.pdf?sequence=2

Bhaskar, R. (1975). A realist theory of science. Leeds: Leeds Books.

Bhaskar, R. (2008). “A realist theory of science,” in Classical texts in critical realism. London: Routledge.

Biesta, G. (2007). Why “what works” won’t work: evidence-based practice and the democratic deficit in educational research. Educ. Theory 57, 1–22. doi: 10.1111/j.1741-5446.2006.00241.x

Bilbao-Osorio, B., Dutta, S., and Lanvin, B. (2014). The global information technology report 2014: Rewards and risks of big data. 1–369. Available online at: https://www.weforum.org/publications/global-information-technology-report-2014/

Birhane, A. (2020). Algorithmic colonization of Africa. SCRIPTed 17, 389–409. doi: 10.2966/scrip.170220.389

Braun, V., and Clarke, V. (2021). Thematic analysis: A practical guide. London: SAGE Publications.

Cohen, L., Manion, L., and Morrison, K. (2018). Research methods in education. 8th Edn. London: Routledge.

Couldry, N., and Mejias, U. A. (2019). Data colonialism: rethinking big data’s relation to the contemporary subject. Telev. New Media 20, 336–349. doi: 10.1177/1527476418796632

Dahmm, H., and Moultrie, T. (2021). Avoiding the data colonialism trap. Trends: Thematic Research Network on Data and Statistics. Available online at: https://www.sdsntrends.org/blog/2021/datacolonialism

Dale, R. (2015). Conjunctions of power and comparative education. Compare 45, 341–362. doi: 10.1080/03057925.2015.1006944

de Souza, D. E. (2014). Culture, context and society – the underexplored potential of critical realism as a philosophical framework for theory and practice. Asian J. Soc. Psychol. 17, 141–151. doi: 10.1111/ajsp.12052

Faroukhi, A. Z., El Alaoui, I., Gahi, Y., and Amine, A. (2020). An adaptable big data value chain framework for end-to-end big data monetization. Big Data Cogn. Comput. 4:34. doi: 10.3390/bdcc4040034

Godin, K., Stapleton, J., Kirkpatrick, S. I., Hanning, R. M., and Leatherdale, S. T. (2015). Applying systematic review search methods to the grey literature: a case study examining guidelines for school-based breakfast programs in Canada. Syst. Rev. 4:138. doi: 10.1186/s13643-015-0125-0

Gorur, R., Dey, J., Faul, M. V., and Piattoeva, N. (2023). NORRAG – decolonising data in education: a discussion paper. Data and Evidence. Available online at: https://www.norrag.org/decolonising-data-in-education-a-discussion-paper/

Heyneman, S. P. (2003). The history and problems in the making of education policy at the World Bank 1960–2000. Int. J. Educ. Dev. 23, 315–337. doi: 10.1016/S0738-0593(02)00053-6

Hossain, M. (2023). Large-scale data gathering: exploring World Bank’s influence on national learning assessments in LMICs. Int. J. Educ. Dev. 102:102877. doi: 10.1016/j.ijedudev.2023.102877

Iyengar, R., Mahal, A. R., Aklilu, L., Sweetland, A., Karim, A., Shin, H., et al. (2016). The use of technology for large-scale education planning and decision-making. Inf. Technol. Dev. 22, 525–538. doi: 10.1080/02681102.2014.940267

Klees, S. J., Ginsburg, M., Anwar, H., Robbins, M. B., Bloom, H., Busacca, C., et al. (2020). The World Bank’s SABER: a critical analysis. Comp. Educ. Rev. 64, 46–65. doi: 10.1086/706757

Kohnke, L., and Foung, D. (2024). Deconstructing the normalization of data colonialism in educational technology. Educ. Sci. 14:1. doi: 10.3390/educsci14010057

Letouzé, E. (2012). Big data for development: Challenges & opportunities. UN Global Pulse. Available online at: https://unstats.un.org/unsd/trade/events/2014/beijing/documents/globalpulse/Big%20Data%20for%20Development%20-%20UN%20Global%20Pulse%20-%20June2012.pdf

Makwati, G., Audinos, B., and Lairez, T. (2003). The role of statistics in improving the quality of basic education in sub-Saharan Africa. ADEA biennial meeting (Mauritius), 1–33. Available online at: https://biennale.adeanet.org/2003/papers/2C_Nesis_ENG_final.pdf

Mintz, S., and Saraogi, N. (2015a). SABER education management information systems country report: Samoa (pp. 1–27). World Bank Group. Available online at: https://documents.worldbank.org/en/publication/documents-reports/documentdetail/430451467999117632/Samoa-Education-management-information-systems

Mintz, S., and Saraogi, N. (2015b). SABER education management information systems country report: Solomon Islands (pp. 1–30). World Bank Group. Available online at: https://documents.worldbank.org/en/publication/documents-reports/documentdetail/127721468184764293/Education-management-information-systems

Mintz, S., Saraogi, N., and Abdul-Hamid, H. (2015). SABER education management information system country report: Papua New Guinea (pp. 1–29). World Bank Group. Available online at: https://documents.worldbank.org/en/publication/documents-reports/documentdetail/806951468187761672/SABER-education-management-information-system-country-report-Papua-New-Guinea-2015

Molinari, J. (2024). A rational case for a critical realist theory of academic writing. J. Crit. Realism 23, 521–544. doi: 10.1080/14767430.2024.2429225

Montoya, S. (2018). Why we need effective education management information systems. Available online at: http://uis.unesco.org/en/blog/why-we-need-effective-education-management-information-systems

Morgan, H. (2022). Conducting a qualitative document analysis. Qual. Rep. 27, 64–77. doi: 10.46743/2160-3715/2022.5044

Muyoya, C., Cisneros, A. J., and Železný-Green, R. (2022). 6 steps to get started on decolonizing data for development—Data.Org. Available online at: https://data.org/news/decolonizing-data-for-development/

Nevo, I., and Slonim-Nevo, V. (2011). The myth of evidence-based practice: towards evidence-informed practice. Br. J. Soc. Work. 41, 1176–1197. doi: 10.1093/bjsw/bcq149

Nzama, L., Sithole, T., and Kahyaoglu, S. B. (2023). The impact of government effectiveness on trade and financial openness: the generalized quantile panel regression approach. J. Risk Financial Manag. 16:1. doi: 10.3390/jrfm16010014

OECD (2017). Development co-operation report 2017: Data for development. Paris: OECD Publishing.

Ozga, J. (2009). Governing education through data in England: from regulation to self-evaluation. J. Educ. Policy 24, 149–162. doi: 10.1080/02680930902733121

Parkhurst, J. O. (2017). The politics of evidence: From evidence-based policy to the good governance of evidence. London: Routledge.

Prakash, M., and Sethi, T. (2018). Measuring and responding to demand for official statistics. AidData’s Blog. Available online at: https://www.aiddata.org/blog/measuring-and-responding-to-demand-for-official-statistics

Pritchett, L., and Sandefur, J. (2014). Context matters for size: why external validity claims and development practice do not mix. J. Glob. Dev. 4, 161–197. doi: 10.1515/jgd-2014-0004

Quijano, A. (2000). Coloniality of power and eurocentrism in Latin America. Int. Sociol. 15, 215–232. doi: 10.1177/0268580900015002005

Read, L., and Atinc, T. M. (2017). Information for accountability: Transparency and citizen engagement for improved service delivery in education systems (pp. 1–52). The Brookings Institution. Available online at: https://www.brookings.edu/research/information-for-accountability-transparency-and-citizen-engagement-for-improved-service-delivery-in-education-systems/

Ricaurte, P. (2019). Data epistemologies, the coloniality of power, and resistance. Telev. New Media 20, 350–365. doi: 10.1177/1527476419831640

Richardson, L., and St. Pierre, E. A. (2005). “Writing: a method of inquiry” in The Sage handbook of qualitative research. 3rd edn. eds. N. K. Denzin and Y. S. Lincoln (Thousand Oaks, CA: Sage Publications), 959–978.

Riddell, A., and Niño-Zarazúa, M. (2016). The effectiveness of foreign aid to education: what can be learned? Int. J. Educ. Dev. 48, 23–36. doi: 10.1016/j.ijedudev.2015.11.013

Rossiter, J. (2020). Link it, open it, use it: Changing how education data are used to generate ideas. Center for Global Development. Available online at: https://www.cgdev.org/publication/link-it-open-it-use-it-changing-how-education-data-are-used-generate-ideas

Saraogi, N., Mayrhofer, D. K., and Abdul-Hamid, H. (2017a). SABER education management information systems country report: Fiji (pp. 1–43). World Bank Group. Available online at: https://documents.worldbank.org/en/publication/documents-reports/documentdetail/442961500371886004/SABER-education-management-information-systems-country-report-Fiji-2017

Saraogi, N., Mayrhofer, D. K., and Abdul-Hamid, H. (2017b). SABER education management information systems country report: Tajikistan (pp. 1–26). World Bank Group. Available online at: https://documents.worldbank.org/en/publication/documents-reports/documentdetail/261831500373515546/SABER-education-management-information-systems-country-report-Tajikistan-2017

Schöpfel, J., and Farace, D. (2010). “Grey literature” in Encyclopedia of library and information sciences. 3rd edn. (CRC Press), 2029–2039. Available online at: https://lilloa.univ-lille.fr/handle/20.500.12210/63948

Steiner-Khamsi, G. (2013). What is wrong with the ‘what-went-right’ approach in educational policy? Eur. Educ. Res. J. 12, 20–33. doi: 10.2304/eerj.2013.12.1.20

Subosa, M., and West, M. (2018). Re-orienting education management information systems (EMIS) towards inclusive and equitable quality education and lifelong learning (pp. 1–67). UNESCO. Available online at: https://unesdoc.unesco.org/ark:/48223/pf0000261943

Tennant, S., and Clayton, A. (2010). The politics of infrastructural projects: a case for evidence-based policymaking. Int. J. Public Adm. 33, 182–191. doi: 10.1080/01900690903360029

Tolley, H., and Shulruf, B. (2009). From data to knowledge: the interaction between data management systems in educational institutions and the delivery of quality education. Comput. Educ. 53, 1199–1206. doi: 10.1016/j.compedu.2009.06.003

Trefzer, A., Jackson, J. T., McKee, K., and Dellinger, K. (2014). Introduction: the global south and/in the global north: interdisciplinary investigations. Glob. South 8, 1–15. doi: 10.2979/globalsouth.8.2.1

UNESCO and Global Partnership for Education (2020). The role of education management information systems in supporting progress towards SDG 4: Recent trends and international experiences. Available online at: https://unesdoc.unesco.org/ark:/48223/pf0000374542

UNESCO and UNICEF (2019). Regional capacity development resource book on monitoring SDG4-Education 2030 in Asia-Pacific. Available online at: https://unesdoc.unesco.org/ark:/48223/pf0000372228

UNESCO Institute for Statistics (2017a). Ed-Data quality assessment framework (Ed-DQAF) to evaluate administrative routine data systems: Manual for the conduct of an evaluation by a national technical team. Available online at: http://uis.unesco.org/sites/default/files/documents/training-workshop-manual-data-quality-assessment-framework-2017-en_0.pdf

UNESCO Institute for Statistics (2017b). The data revolution in education. UNESCO Digital Library. Available online at: https://unesdoc.unesco.org/ark:/48223/pf0000247780

UNESCO Institute for Statistics (2020). Operational guide to using EMIS to monitor SDG – Educational management information systems. Available online at: https://emis.uis.unesco.org/operational-guide/

United Nations (2014). A world that counts: Mobilizing the data revolution for sustainable development. Report of the secretary general’s independent expert advisory group on a data revolution for sustainable development. Available online at: https://www.undatarevolution.org/wp-content/uploads/2014/12/A-World-That-Counts2.pdf

Van Wyk, C., and Crouch, L. (2020). Efficiency and effectiveness in choosing and using an EMIS: Guidelines for data management and functionality in education management information systems (EMIS). Available online at: https://unesdoc.unesco.org/ark:/48223/pf0000374582

World Bank (2011). Improving information systems for planning and policy dialogue: The SABER EMIS assessment tool (pp. 1–70). Available online at: http://wbgfiles.worldbank.org/documents/hdn/ed/saber/supporting_doc/background/ems/saberemis.pdf

World Economic Forum (2011). Personal data: The emergence of a new asset class. Available online at: https://www.weforum.org/publications/personal-data-emergence-new-asset-class/

Keywords: education management information system, EMIS, evidence-based, data-driven, decision-making, educational data, critical realism, data colonialism

Citation: El Mazbouh M, Shah R and Lee K (2025) If evidence matters, why does the data die? Implementing education management information systems (EMIS) in development contexts. Front. Educ. 10:1616717. doi: 10.3389/feduc.2025.1616717

Received: 23 April 2025; Accepted: 18 August 2025;
Published: 03 September 2025.

Edited by:

Heru Susanto, Indonesia Institute of Sciences (LIPI), Indonesia

Reviewed by:

Vince Hooper, SPJ Global, United Arab Emirates
Doniwen Pietersen, University of South Africa, South Africa

Copyright © 2025 El Mazbouh, Shah and Lee. This is an open-access article distributed under the terms of the Creative Commons Attribution License (CC BY). The use, distribution or reproduction in other forums is permitted, provided the original author(s) and the copyright owner(s) are credited and that the original publication in this journal is cited, in accordance with accepted academic practice. No use, distribution or reproduction is permitted which does not comply with these terms.

*Correspondence: Manal El Mazbouh, manal.elmazbouh@auckland.ac.nz

Disclaimer: All claims expressed in this article are solely those of the authors and do not necessarily represent those of their affiliated organizations, or those of the publisher, the editors and the reviewers. Any product that may be evaluated in this article or claim that may be made by its manufacturer is not guaranteed or endorsed by the publisher.