
ORIGINAL RESEARCH article

Front. Commun., 14 July 2025

Sec. Media, Creative, and Cultural Industries

Volume 10 - 2025 | https://doi.org/10.3389/fcomm.2025.1614817

This article is part of the Research Topic "Teaching and Assessing with AI: Teaching Ideas, Research, and Reflections."

Reflection AI: feeding the machine - the hidden labour behind AI tools and ethical implications for higher education

  • Social Sciences Division, Oxford Internet Institute, University of Oxford, Oxford, United Kingdom

As university instructors integrate AI tools, such as large language models (LLMs), into their pedagogy, they must grapple with the ethical and practical implications of these technologies. This reflection examines the overlooked labour of Cloudworkers and data workers whose contributions make AI systems functional. Drawing on insights from Fairwork’s Cloudwork and AI research, it argues for the adoption of the Fairwork scoring system, as both a methodology and a heuristic, to guide ethical engagement with AI, and urges higher education instructors and students to advocate for improved working conditions in AI supply chains. Additionally, it explores the multifaceted impacts of AI technologies on global labour markets, highlighting pathways to more equitable practices through education, policy, and institutional intervention. By centring the experiences of Cloudworkers and data enrichment employees, the article urges various stakeholders to foster a more ethical approach to AI in higher education.

Introduction

The integration of AI tools like ChatGPT, Grammarly, and image generation systems into higher education classrooms has transformed how we teach and learn (Grassini, 2023). These technologies promise enhanced efficiency, creativity, and access to knowledge (Heaven, 2023; OpenAI, 2023). Yet, beneath their polished interfaces lies an invisible workforce of Cloudworkers and data workers whose labour sustains these systems. These workers often perform monotonous, underpaid, and emotionally taxing tasks, such as moderating harmful content or labelling data for machine learning algorithms (Arsht and Etcovitch, 2018; Fairwork, 2021; Hao and Seetharaman, 2023). This global workforce operates largely out of sight, raising critical ethical questions for educators and institutions that rely on AI tools.

These questions highlight a gap in the literature on AI ethics in higher education settings, where discussions on the ethical use of AI tools, such as applications that draw on large language models (LLMs), omit critical engagement with the production of such tools: who are the actual workers that enable the use of AI tools in classrooms, and under what conditions (from structural, global inequalities to unfair workplace practices) are these tools produced?1

This article provides guidelines for diverse stakeholders, with a particular emphasis on higher-ed, including lecturers, postgraduate students, and institutional staff/administration, to help them make informed decisions about the use of AI systems. These recommendations draw primarily on the Fairwork project, which evaluates working conditions in companies that train AI and assigns them a Fairwork score that underlines where they align with Fairwork’s principles of fairness in the workplace, and where they fail to do so. As researchers on the Fairwork project, we additionally propose that these stakeholders familiarise themselves with the Fairwork research approach, methodology, and outputs, such as our annual scores evaluating working conditions at AI suppliers, to learn more about the companies and workers building AI systems and to choose service providers that attain a higher Fairwork score.

As educators in higher education, our responsibility extends beyond equipping students with the latest tools. In our respective institutions, we must critically examine the labour practices underpinning these technologies and question whether our pedagogical approaches inadvertently perpetuate exploitation. In this reflection, we argue that universities must recognise the hidden labour behind AI tools and adopt ethical frameworks, such as the Fairwork scoring system, using it as both a methodological framework and a heuristic tool, to ensure that their use of AI aligns with principles of social justice and equity. Moreover, universities must not only educate students about these issues but also take an active role in driving policy changes that demand accountability from AI corporations. It is therefore our responsibility to be reflexive about the ways we bring AI systems into our universities and to invite our students to be mindful of the precarious labour that enables such systems to exist.

The hidden workforce behind AI

Education, be it K-12 or higher-ed, has long been considered a site sustained in large part through the invisible or hidden labour provided by instructors (Staudt Willet and He, 2024). Scholars working on this topic have highlighted how teachers, as well as staff working in educational institutions, engage in hidden, invisible, and un- or underpaid work. In recent years, in light of growing scholarship on AI, scholars have started to question how education has become a site of datafication, and how educators, as well as staff (admin, tech, etc.), sustain datafication by providing hidden and unpaid labour (Selwyn, 2021). In this paper, we focus on the labour provided by workers who enable the very AI tools used in classrooms. In contrast to previous studies, the workers we highlight are not located in educational sites, such as schools or universities, but rather at home in front of their computers on Cloudwork platforms, or in cubicles in business process outsourcing (BPO) firms often located thousands of miles away, feeding the machine day after day (Muldoon et al., 2024; Tubaro et al., 2020).

AI tools such as LLMs are often marketed as autonomous and efficient, but they are anything but independent (Shen et al., 2024). Every intelligent output relies on a foundation of human labour. This workforce, consisting primarily of Cloudworkers2 and data enrichment3 workers, plays a pivotal role in data services (such as data labelling), in training AI systems, in moderating content, and in ensuring accuracy (Gray and Suri, 2019). Workers manually annotate data, prepare training datasets, and refine algorithmic outputs. Tasks such as tagging images, labelling text for sentiment, and flagging inappropriate content are essential to building reliable AI models (Muldoon et al., 2024). Workers, moreover, review and filter vast quantities of harmful or explicit content to train AI moderation tools, often exposing themselves to psychologically harmful material without adequate mental health support (Roberts, 2016). They also correct errors and provide feedback to improve system performance. These tasks require precision and attention to detail but are typically undervalued in terms of pay and recognition (Fairwork, 2021).

These jobs are typically outsourced to Cloudwork platforms operating in a planetary labour market (Anwar and Graham, 2020) and to BPO firms in low-income countries where labour is cheap and labour protections are weak (Graham et al., 2017). It should be noted, however, that there is growing worker activism in countries like Kenya, despite national agendas that prioritise company narratives over worker wellbeing under the rubric of job creation (The Republic of Kenya, Presidency, 2024). In Kenya alone, three organisations, Techworker Community Africa, the Data Labeler’s Association, and the African Content Moderators Union, organise data workers. Nevertheless, workers continue to face precarious employment conditions, including low wages that fail to meet local living standards, inconsistent hours and unpredictable income, and a lack of benefits such as health insurance, paid leave, or mental health resources (Ustek Spilda et al., in press).

Uncompensated work time is one example of the challenges these workers face: Cloudworkers spend, on average, 8.5 hours per week on unpaid tasks such as applying for jobs or managing demanding clients (Fairwork, 2023a). Non-payment is another significant issue for Cloudworkers engaged in data enrichment tasks on microwork platforms; the same global survey reported that 27% of these workers had encountered it, and that they earned an average of USD 2.15 per hour.

The psychological toll of such work, especially in moderation roles that expose individuals to disturbing content, exacerbates the ethical concerns associated with these practices (Roberts, 2016). Whilst students and educators in wealthier regions benefit from the efficiency of AI tools, the labourers enabling these systems remain largely invisible, their contributions unacknowledged and undervalued. Expanding awareness and advocacy for these workers is vital to building a fairer technological ecosystem.

Implications for university instructors and students

The ethical challenges posed by AI tools are not limited to the corporate sector; they extend into higher education, where these technologies are increasingly central to teaching and learning. University instructors must recognise the human labour embedded in AI tools and educate students about this reality. University departments should acknowledge the challenges associated with the use of these services and adopt ethical practices in their procurement, deployment, and use by faculty, instructors, and students. These ethical practices should include evaluating the services and systems implemented against fair standards, and informing students of the precarious conditions that underscore the labour enabling such tools. Here is one way of doing so: the human labour feeding AI could be incorporated into digital literacy or ethics components of the curriculum, fostering a deeper understanding of the global economy that sustains these tools (Noble, 2018). For example, when teaching with AI writing assistants like ChatGPT, instructors could include discussions about the data enrichment processes that enable these tools to function. This would provide students with a holistic understanding of the technology and encourage critical thinking about its ethical dimensions.

Encouraging reflective thinking about the production of AI and the labour involved in these educational processes aligns with a critical pedagogy perspective. This approach seeks to promote critical awareness of power imbalances and historically rooted issues, emphasising the necessity of challenging systems and advocating for social change as a “practice of freedom” (Freire, 2005). In this context, academic institutions and their staff must illuminate the labour-intensive processes that underlie the AI systems they adopt. They must also address the power asymmetries and challenges faced by vulnerable social groups, including workers within these supply chains.

Teaching and learning are not neutral, despite what the ideology of traditional educational practices suggests (Giroux, 2024). Whilst some may view AI merely as a powerful tool to support the learning process, a critical perspective takes a different approach. It seeks to expose how various forms of power and inequality (social, cultural, and economic) manifest in both formal and informal education for children and adults (Apple et al., 2009, p. 3). Such a critical take on AI in higher education settings is essential because the theories and actions used to explain social phenomena “structure the possibilities for knowing, acting, feeling, reflecting, and transforming” (Robertson and Dale, 2015, p. 3). Consequently, the theories and perspectives brought to AI in the classroom significantly influence how students engage with this impactful technology.

AI tools should not be framed as substitutes for human creativity and critical thinking but as complements to them (Holstein and Aleven, 2020). Assignments could ask students to reflect on their experiences using AI tools, including the ethical considerations of relying on such technologies. Research in recent years has identified the risks and harms associated with AI (Slattery et al., 2024), including its environmental impacts (Valdivia, 2024). For instance, students might be tasked with researching the working conditions of data labellers or proposing ways to make AI supply chains more equitable. In doing so, the Fairwork scoring scheme (explained below) can provide a useful methodological framework, as well as a heuristic tool, to assess whether our day-to-day engagement with AI systems promotes practices that exploit the very workers, the so-called “human in the loop,” who enable these systems by constantly training the machine. One does not, however, have to replicate the full Fairwork research process.

Simple desk research into companies that utilise workers to feed the AI machine could serve as a starting point, helping users decide whether to continue using particular AI tools. If, for example, ChatGPT were known to engage firms that pay their workers below minimum wage, demand unpaid labour, or do little to mitigate workers’ exposure to physical or mental risk, these should serve as red flags for users, including educators, to seek alternatives, such as companies that take additional measures to protect their workers as they train AI. They should likewise serve as red flags for administrative staff, who often serve as the decision makers bringing AI systems into universities. Reports published by Fairwork can serve as a guide to working conditions in BPOs. For example, educators, as well as administrators, who encourage the use of translation and transcription platforms or buy institutional subscriptions can refer to the several reports published by Fairwork and choose or subscribe to service providers/platforms that rank higher in Fairwork evaluations (Fairwork, 2022; Fairwork, 2023b; Fairwork, 2025). Students could also be reminded of the “invisible” or “ghost” workers that power AI (Altenried, 2020) and be encouraged to learn more about the conditions that shape data workers’ experience at AI-training sites, such as BPOs, by reading reports published by the Fairwork project (e.g., Fairwork, 2023c). We therefore encourage educators to use these reports to educate students about the risks and harms associated with AI work, and administrative staff to draw on our findings when subscribing to AI service providers, so that both groups make ethical decisions about the adoption and use of AI tools. Additionally, instructors should model responsible AI use by emphasising transparency. This includes disclosing the AI tools used in teaching and discussing their potential ethical implications openly (Fairwork, 2021).

Moreover, universities have a unique opportunity to amplify these lessons through interdisciplinary collaborations that integrate insights from computer science, sociology, and economics. Hosting guest lectures, workshops, and public forums on AI ethics can provide students with diverse perspectives. Beyond the classroom, these initiatives could spur broader movements towards ethical AI practices within academia and beyond.

The role of the Fairwork framework

Whilst instructors can foster ethical awareness, institutional change is essential to addressing the systemic issues underpinning AI tools. This is where the Fairwork scoring system offers a critical intervention.

Data sources and evaluation process

Fairwork evaluates companies using a robust, data-driven methodology grounded in five core principles: fair pay, fair conditions, fair contracts, fair management, and fair representation (Fairwork, 2021). Its methodology begins with a thorough review of publicly available data, including company policies, terms and conditions, and public statements. This is supplemented by direct communication with the companies being assessed, providing them an opportunity to share additional information and clarify their practices. Worker interviews are a critical component of the evaluation process, offering firsthand insights into working conditions, pay structures, and contractual arrangements. This triangulation of sources ensures that evaluations are comprehensive and grounded in reality.

The data collected is analysed against the five Fairwork principles. For example, under the principle of fair pay, companies must demonstrate that workers earn at least the local minimum wage after accounting for expenses. For fair conditions, platforms are assessed on their ability to provide safe and healthy working environments, which includes protections against physical and psychological harm. The principle of fair contracts examines whether contracts are transparent and accessible, avoiding clauses that disproportionately disadvantage workers. Fair management focuses on mechanisms for dispute resolution and prevention of discrimination, whilst fair representation evaluates whether workers have a voice in governance and decision-making processes (Fairwork, 2021; see Tables 1, 2).
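To make the fair pay check concrete, the short sketch below expresses it as a computation. This is our illustrative reading, not Fairwork’s own tooling; the expense and minimum wage figures are invented for the example, whilst the USD 2.15 hourly average comes from the survey cited above (Fairwork, 2023a).

```python
# Illustrative reading of the fair pay check (not Fairwork's tooling):
# earnings count towards fair pay only after work-related expenses are
# deducted, and the net figure is compared with the local minimum wage.

gross_hourly = 2.15        # average reported hourly pay (Fairwork, 2023a)
expenses_hourly = 0.40     # assumed hourly costs: internet, power, platform fees
local_minimum_wage = 1.90  # assumed local hourly minimum wage

net_hourly = gross_hourly - expenses_hourly
meets_basic_fair_pay = net_hourly >= local_minimum_wage
print(f"Net pay: USD {net_hourly:.2f}/h; meets minimum wage: {meets_basic_fair_pay}")
```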

Table 1. Fairwork AI principles.

Table 2. Fairwork Cloudwork (online work) principles.

The integration of rigorous evaluation methodologies and worker-centred advocacy distinguishes Fairwork as a transformative force in the AI and platform economy. Universities can enhance their engagement with Fairwork principles by developing partnerships that extend the framework’s applications to local and regional AI projects.

The ten-point scoring system

Companies are scored on a scale of up to ten points, with each principle contributing a maximum of two points. To achieve a full score (of 2) under a principle, companies must meet both basic and advanced criteria. For instance, to score both points for fair pay, a platform must ensure not only that workers earn above the minimum wage but also that they earn a living wage that accounts for local cost-of-living standards. Similarly, fair contracts require both clarity in terms and active measures to ensure contracts do not exploit workers’ lack of legal knowledge or bargaining power.
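The mechanics of the scheme can be summarised in the short sketch below. It is a minimal illustration under our reading of the methodology (the principle names come from the framework, and the advanced point is counted only where the basic criterion is met); it is not Fairwork’s own scoring software, and the example platform is hypothetical.

```python
# A minimal sketch of the Fairwork ten-point scheme (illustrative only;
# not Fairwork's own tooling). Each of the five principles can earn up
# to two points: one basic point and one advanced point, with the
# advanced point counted only if the basic criterion is met.

PRINCIPLES = ["fair pay", "fair conditions", "fair contracts",
              "fair management", "fair representation"]

def fairwork_score(evidence: dict[str, tuple[bool, bool]]) -> int:
    """Compute a 0-10 score from (basic_met, advanced_met) per principle."""
    score = 0
    for principle in PRINCIPLES:
        basic, advanced = evidence.get(principle, (False, False))
        if basic:
            score += 1
            if advanced:  # e.g., a living wage on top of the minimum wage
                score += 1
    return score

# Hypothetical platform: pays the minimum wage but not a living wage,
# and meets both criteria for contracts and management.
example = {
    "fair pay": (True, False),
    "fair conditions": (True, False),
    "fair contracts": (True, True),
    "fair management": (True, True),
    "fair representation": (False, False),
}
print(fairwork_score(example))  # -> 6
```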

Scores are published annually, fostering accountability and incentivising continuous improvement. High-scoring companies are celebrated as exemplars, setting benchmarks for ethical practices within the industry. Conversely, lower scores serve as a call to action, urging companies to address deficiencies. The iterative nature of this scoring system ensures that companies remain motivated to enhance their labour practices year after year (Fairwork, 2021; Graham et al., 2025).

By adopting the Fairwork framework, universities can promote transparency and ethical accountability in their selection of AI tools. Institutions can use Fairwork scores to inform procurement decisions, ensuring that the tools they adopt align with their values. Additionally, by collaborating with Fairwork to audit the tools they use, universities can play an active role in advocating for improved working conditions in AI supply chains, setting a precedent for other sectors to follow.

Practical steps for universities

Whilst the scoring exercise is a key component of the Fairwork methodology, it should be noted that it is one means to an end: improving working conditions for data workers. Fairwork is an action-research project (Alyanak et al., in press), meaning that both our methodology and our research output are intended to bring change to the future of AI work. The scores are a starting point for a larger discussion with multiple stakeholders, in which we show that a fairer future of work is possible in AI. For the companies we engage with, we show how this is possible by changing their policies and practices to increase their score. For regulators, the scores make the case that top-level interventions are needed to better regulate the digital economy. For workers, the scores help us weave webs of solidarity, where we not only remind them of the wrongdoings in this economy but also provide feedback on ways they can demand rights from the companies they work for. And for the larger public, the scores should serve as a reminder that behind the code and algorithms there are always workers, in flesh and blood, working day and night to perfect the machine.

It is therefore imperative that institutions and educators reflect on the Fairwork framework to start a conversation, within departments and with students in the classroom and during office hours, about why even the most basic labour rights continue to be violated by the companies training AI, and what action students, as consumers of AI tools, can take to demand more humane conditions. Educators, furthermore, are encouraged to include text in course syllabi reminding students of the very workers, and the conditions they are subjected to, that enable the tools they use to complete assignments. Students, in short, should be invited to make informed decisions about using AI tools. One way of doing so is by inviting them to read Fairwork reports, which offer comprehensive insights into the workers and companies that train AI.

In addition to reminding students of the hidden aspects of AI labour, such as violations of labour rights, universities should also publicly commit to using Fairwork scores in decision-making processes for AI tools and digital services. This could involve integrating Fairwork scores into procurement policies, ensuring that only tools from companies meeting specific ethical thresholds are considered. Additionally, institutions should require vendors to demonstrate compliance with Fairwork principles during the bidding process, reinforcing the importance of fair labour practices. By establishing clear guidelines and accountability measures, universities can set a standard for ethical engagement with AI technologies (Noble, 2018). Moreover, they should partner with Fairwork to audit AI tools used in teaching, research, and administration. Educating faculty and students is essential, with workshops and courses offered on the ethical implications of AI, emphasising the human labour behind these tools.
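As a thought experiment, the procurement threshold described above could be encoded as simply as the following sketch. The vendor names, scores, and the threshold value are hypothetical, and real decisions would of course weigh Fairwork’s published reports alongside other procurement criteria.

```python
# Illustrative procurement filter: shortlist only vendors whose most
# recent Fairwork score meets an institution-defined ethical threshold.
# All vendor names and scores below are hypothetical.

from dataclasses import dataclass

@dataclass
class Vendor:
    name: str
    service: str
    fairwork_score: int  # latest published score, 0-10

ETHICAL_THRESHOLD = 7  # example policy value set by the institution

def shortlist(vendors: list[Vendor]) -> list[Vendor]:
    """Return vendors meeting the threshold, best-scoring first."""
    eligible = [v for v in vendors if v.fairwork_score >= ETHICAL_THRESHOLD]
    return sorted(eligible, key=lambda v: v.fairwork_score, reverse=True)

bids = [
    Vendor("TranscribeCo", "transcription", 8),
    Vendor("AnnotateNow", "data labelling", 4),
    Vendor("LinguaCloud", "translation", 7),
]
for v in shortlist(bids):
    print(f"{v.name} ({v.service}): {v.fairwork_score}/10")
```

In practice, such a rule would be one input among many; the point is that a published, comparable score makes an ethical floor easy to write into policy.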

Another pressing issue related to AI is the increasing adoption of generative AI systems in society, particularly in educational settings. Many universities and faculties are grappling with how to establish ethical guidelines for the use of these tools. These systems impact the educational process in various ways, serving as support for, or even a substitute for, student assignments and offering new methods for augmenting or automating teaching tasks. In light of generative AI, there is a need for a critical approach that combines reflections on the underlying data work involved in GenAI development with inquiries into the broader impact of GenAI use on work. A team of leading researchers has recommended that institutions prioritise this topic in AI discussions and design both social protections for workers and skills development, which would apply to instructors as well as students (Global Partnership on AI, 2023).

Institutions must use their influence to push for stronger labour protections in AI supply chains, both nationally and globally. Universities can advocate for these protections by partnering with labour rights organisations and conducting independent audits of AI supply chains to identify violations and areas for improvement. For example, institutions could join coalitions like the Fairwork project to amplify their impact and align efforts with international standards. Past successes, such as universities influencing tech companies to adopt greener energy practices, demonstrate that academic institutions can effect significant industry change (Graham et al., 2017). Universities might also host conferences or publish reports to bring attention to labour issues in AI, thereby pressuring companies to commit to better practices.

Moreover, collaborative initiatives between universities and international organisations could facilitate the development of ethical guidelines for AI tools, ensuring that labour practices are central to global technology standards. By championing these causes, universities can reinforce their roles as leaders in social responsibility and innovation.

These recommendations may face challenges in implementation. Students, for example, may set aside ethical concerns amid pending deadlines, and institutions may procure services from providers who resort to ethically dubious labour practices. Teachers may also struggle to approach AI use in the classroom critically when their institutions lack a policy, or adopt permissive rules, on the topic.

However, it is imperative for all users, be they university administrators, instructors, or students, to be informed of the ethical debates that envelop the very tools they use, and to be critical of such use. This is why higher education institutions should discuss and implement policies and guidelines that acknowledge the problems in AI supply chains and provide guidance on how to address them in classrooms. Another way of addressing these challenges would be for universities to develop their own models with ethical considerations in mind. Whilst the development of large language models requires substantial resources, a consortium of universities promoting the use of AI in education could embark on a joint initiative to that end.

As underlined by Freire (2005), education fosters critical thinking, and educators should strive in their instruction to be critical of the use of AI in educational settings. Such an approach would pave the way towards challenging the systems in place and working collectively towards social change.

Conclusion

The adoption of AI tools in education offers transformative potential but also implicates us in systems of global labour exploitation. As instructors and institutions, we have a moral obligation to acknowledge and address the hidden labour behind these technologies. By centring the experiences of Cloudworkers and data enrichment employees, we can foster a more ethical approach to AI in higher education. The Fairwork framework provides a practical and actionable pathway for achieving this goal. Universities must take the lead in promoting transparency, advocating for fair labour practices, and ensuring that the tools we use align with our values. This is not just about teaching with AI; it is about teaching responsibly, with an unwavering commitment to justice and equity for all workers, visible and invisible alike.

Expanding these efforts through global collaborations, interdisciplinary research, and active policy engagement will help ensure that AI tools serve as instruments of equity rather than exploitation. The ethical adoption of AI in education is not merely a challenge; it is an opportunity to model the values of justice and human dignity in a rapidly evolving technological landscape.

Data availability statement

The data analyzed in this study is subject to the following licenses/restrictions: The research draws on general findings from the Fairwork project. It does not rely on primary data in its analysis. Requests to access these datasets should be directed to info@fair.work.

Author contributions

MG: Writing – original draft, Writing – review & editing. OA: Writing – original draft, Writing – review & editing. JV: Writing – original draft, Writing – review & editing.

Funding

The author(s) declare that no financial support was received for the research and/or publication of this article.

Conflict of interest

The authors declare that the research was conducted in the absence of any commercial or financial relationships that could be construed as a potential conflict of interest.

Generative AI statement

The authors declare that no Gen AI was used in the creation of this manuscript.

Publisher’s note

All claims expressed in this article are solely those of the authors and do not necessarily represent those of their affiliated organizations, or those of the publisher, the editors and the reviewers. Any product that may be evaluated in this article, or claim that may be made by its manufacturer, is not guaranteed or endorsed by the publisher.

Footnotes

1. ^On the contrary, there is extensive literature on the working conditions of data workers, including but not limited to Mann and Graham (2016), Miceli and Posada (2022), Muldoon et al. (2024), and Shestakofsky (2024). Literature on working conditions in the BPOs that train AI systems (e.g., for OpenAI’s ChatGPT) has to date been limited to journalistic accounts (see Perrigo, 2023; Rowe, 2023; Hao and Seetharaman, 2023).

2. ^Cloudwork can be defined as “remotely performed labour mediated by digital labour platforms – companies that connect workers with clients through a digital interface, exert control over and extract value through the labour process” (Howson et al., 2023, p. 733).

3. ^Data enrichment can be defined as “Data curation for the purposes of machine learning model development that requires human judgment and intelligence. This can include data preparation, cleaning, labelling, and human review of algorithmic outputs, sometimes performed in real time” (Partnership on AI, 2021, p. 9).

References

Altenried, M. (2020). The platform as factory: crowdwork and the hidden labour behind artificial intelligence. Capital Class 44, 145–158. doi: 10.1177/0309816819899410

Alyanak, O., Bertolini, A., Ustek-Spilda, F., Valente, J., Warin, R., and Graham, M. (in press). “Action research: the Fairwork project” in The handbook of digital labour. eds. E. Bulut, J. Y. Chen, R. Grohmann, and K. Jarrett (Sage).

Anwar, M. A., and Graham, M. (2020). Digital labour at economic margins: African workers and the global information economy. Rev. Afr. Polit. Econ. 47, 95–105. doi: 10.1080/03056244.2020.1728243

Apple, M. W., Au, W., and Gandin, L. A. (2009). “Mapping critical education” in The Routledge international handbook of critical education. eds. M. W. Apple, W. Au, and L. A. Gandin (New York, Oxford: Routledge), 3–19.

Arsht, A., and Etcovitch, D. (2018). The human cost of online content moderation. Harvard Journal of Law and Technology. Available online at: https://jolt.law.harvard.edu/digest/the-human-cost-of-online-content-moderation (Accessed June 10, 2025).

Fairwork (2021). Fairwork labour standards in the platform economy. Available online at: https://www.fair.work (Accessed May 26, 2025).

Fairwork (2022). Fairwork translation & transcription platform ratings 2022. Oxford, United Kingdom: University of Oxford.

Fairwork (2023a). Work in the planetary labour market: Fairwork Cloudwork ratings 2023. Oxford, United Kingdom: University of Oxford.

Fairwork (2023b). Fairwork translation & transcription platform ratings 2023. Oxford, United Kingdom: University of Oxford.

Fairwork (2023c). Fairwork AI ratings 2023: the workers behind AI at Sama. Oxford, United Kingdom: University of Oxford.

Fairwork (2025). Cloudwork report 2025: advancing standards in digital labour and AI supply chain governance. Oxford, United Kingdom: University of Oxford.

Freire, P. (2005). Pedagogy of the oppressed (30th anniversary edition). New York: Continuum.

Giroux, H. A. (2024). Educators as public intellectuals and the challenge of fascism. Policy Futures Educ. 22, 1533–1539. doi: 10.1177/14782103241226844

Global Partnership on AI (2023). Policy brief: generative AI, jobs and policy response. Montreal: GPAI. Available online at: https://gpai.ai/projects/future-of-work/policy-brief-generative-ai-jobs-and-policy-response-innovation-workshop-montreal-2023.pdf (Accessed May 18, 2025).

Graham, M., Alyanak, O., Bertolini, A., Feuerstein, P., Kuttler, T., Ustek Spilda, F., et al. (2025). Pressure and praise as an action research methodology: the case of Fairwork. Environ. Plan. A Econ. Space. doi: 10.1177/0308518X251336893

Graham, M., Hjorth, I., and Lehdonvirta, V. (2017). Digital labour and development: impacts of global digital labour platforms. Dev. Stud. Res. 4, 12–29. doi: 10.1177/1024258916687250

Grassini, S. (2023). Shaping the future of education: exploring the potential and consequences of AI and ChatGPT in educational settings. Educ. Sci. 13:692. doi: 10.3390/educsci13070692

Gray, M. L., and Suri, S. (2019). Ghost work: how to stop Silicon Valley from building a new global underclass. Boston: Houghton Mifflin Harcourt.

Hao, K., and Seetharaman, D. (2023). Cleaning up ChatGPT takes heavy toll on human workers. Wall Street Journal. Available online at: https://www.wsj.com/tech/chatgpt-openai-content-abusive-sexually-explicit-harassment-kenya-workers-on-human-workers-cf191483 (Accessed June 10, 2025).

Heaven, W. D. (2023). ChatGPT is going to change education, not destroy it. MIT Technology Review. Available online at: https://www.technologyreview.com/2023/04/06/1071059/chatgpt-change-not-destroy-education-openai/ (Accessed May 18, 2025).

Holstein, K., and Aleven, V. (2020). Designing for human–AI complementarity in K-12 education. AI Mag. 43, 239–248. doi: 10.1002/aaai.12058

Howson, K., Johnston, H., Cole, M., Ferrari, F., Ustek-Spilda, F., and Graham, M. (2023). Unpaid labour and territorial extraction in digital value networks. Global Networks 23, 732–754. doi: 10.1111/glob.12407

Mann, L., and Graham, M. (2016). The domestic turn: business process outsourcing and the growing automation of Kenyan organisations. J. Dev. Stud. 52, 530–548. doi: 10.1080/00220388.2015.1126251

Miceli, M., and Posada, J. (2022). The data-production dispositif. Proc. ACM Hum. Comput. Interact. 6, 1–37. doi: 10.1145/3555561

Muldoon, J., Graham, M., and Cant, C. (2024). Feeding the machine: the hidden human labour powering AI. London: Canongate/New York: Bloomsbury.

Noble, S. (2018). Algorithms of oppression: how search engines reinforce racism. New York: New York University Press.

OpenAI (2023). Teaching with AI. Available online at: https://openai.com/index/teaching-with-ai/ (Accessed May 18, 2025).

Partnership on AI (2021). Responsible sourcing of data enrichment services. Available online at: https://partnershiponai.org/paper/responsible-sourcing-considerations/ (Accessed March 21, 2025).

Perrigo, B. (2023). Exclusive: OpenAI used Kenyan workers on less than $2 per hour to make ChatGPT less toxic. Time. Available online at: https://time.com/6247678/openai-chatgpt-kenya-workers/ (Accessed June 10, 2025).

Roberts, S. T. (2016). “Commercial content moderation: digital laborers’ dirty work” in The intersectional internet: race, sex, class and culture online. eds. S. U. Noble and B. Tynes (New York: Peter Lang Publishing), 147–160.

Robertson, S., and Dale, R. (2015). Toward a critical cultural political economy of the globalisation of education. Glob. Soc. Educ. 13, 149–170. doi: 10.1080/14767724.2014.967502

Rowe, N. (2023). ‘It’s destroyed me completely’: Kenyan moderators decry toll of training of AI models. Guardian. Available online at: https://www.theguardian.com/technology/2023/aug/02/ai-chatbot-training-human-toll-content-moderator-meta-openai (Accessed June 10, 2025).

Selwyn, N. (2021). The human labour of school data: exploring the production of digital data in schools. Oxf. Rev. Educ. 47, 353–368. doi: 10.1080/03054985.2020.1835628

Shen, Y., Shao, J., Zhang, X., Lin, Z., Pan, H., Li, D., et al. (2024). Large language models empowered autonomous edge AI for connected intelligence. IEEE Commun. Mag. 62, 140–146. doi: 10.1109/MCOM.001.2300550

Shestakofsky, B. (2024). Cleaning up data work: negotiating meaning, morality, and inequality in a tech startup. Big Data Soc. 11. doi: 10.1177/20539517241285372

Slattery, P., Saeri, A. K., Grundy, E. A., Graham, J., Noetel, M., Uuk, R., et al. (2024). The AI risk repository: a comprehensive meta-review, database, and taxonomy of risks from artificial intelligence. doi: 10.13140/RG.2.2.28850.00968

Staudt Willet, K. B., and He, D. (2024). Educators’ invisible labour: a systematic review. Rev. Educ. 12:e3473. doi: 10.1002/rev3.3473

The Republic of Kenya, Presidency (2024). Government’s plan to create 1 million jobs. Available online at: https://www.president.go.ke/governments-plan-to-create-1-million-jobs/

Tubaro, P., Casilli, A. A., and Coville, M. (2020). The trainer, the verifier, the imitator: three ways in which human platform workers support artificial intelligence. Big Data Soc. 7:205395172091977. doi: 10.1177/2053951720919776

Ustek Spilda, F., Brittain, L., Alyanak, O., and Graham, M. (in press). “Datafication, surveillance and automation: capturing workers’ experience in the digital economy with Fairwork AI principles” in Job quality in a turbulent era. eds. A. Piasna and J. Leschke (European Trade Union Institute (ETUI)).

Valdivia, A. (2024). The supply chain capitalism of AI: a call to (re)think algorithmic harms and resistance through environmental lens. Inf. Commun. Soc. 1–17. doi: 10.1080/1369118X.2024.2420021

Keywords: artificial intelligence, working conditions, Fairwork, higher education, AI ethics

Citation: Graham M, Alyanak O and Valente JCL (2025) Reflection AI: feeding the machine - the hidden labour behind AI tools and ethical implications for higher education. Front. Commun. 10:1614817. doi: 10.3389/fcomm.2025.1614817

Received: 02 May 2025; Accepted: 25 June 2025;
Published: 14 July 2025.

Edited by:

Kelly Merrill Jr., University of Cincinnati, United States

Reviewed by:

Kenzo Seto, Federal University of Rio de Janeiro, Brazil
Thomas Sommerer, Johannes Kepler University of Linz, Austria

Copyright © 2025 Graham, Alyanak and Valente. This is an open-access article distributed under the terms of the Creative Commons Attribution License (CC BY). The use, distribution or reproduction in other forums is permitted, provided the original author(s) and the copyright owner(s) are credited and that the original publication in this journal is cited, in accordance with accepted academic practice. No use, distribution or reproduction is permitted which does not comply with these terms.

*Correspondence: Oğuz Alyanak, oguz.alyanak@oii.ox.ac.uk
