CONCEPTUAL ANALYSIS article

Front. Polit. Sci., 02 July 2025

Sec. Politics of Technology

Volume 7 - 2025 | https://doi.org/10.3389/fpos.2025.1628139

This article is part of the Research Topic “Human Rights and Artificial Intelligence.”

Generative artificial intelligence and the risk of technodigital colonialism

  • Bioethics Graduate Program, University of Brasilia, Brasilia, Brazil

The use of Generative Artificial Intelligence has raised concerns related to plagiarism in scientific contexts. However, bad academic writing is far from being the main ethical challenge related to digital transformations in knowledge production. Moreover, science is not the only trusted discourse affected: journalism and law are also deeply impacted in their social roles by the dissemination of artificially generated discourse. Power and knowledge are increasingly imbricated in the digital society, in a global context where colonial strategies of hierarchization, dehumanization and exploitation are still in place. In response to the insufficiency of high-level moral principles in the face of the ethical and Human Rights challenges brought by GenAI applications, this paper offers an alternative theoretical approach to digital ethics, presented in the “decolonizing ethical thinking” section. The aim is to focus on the role that new epistemic dynamics play in the risk of technodigital colonialism. Decoloniality readings should account for why benefits and risks are not universally distributed, and may therefore help ethical responses be more attentive to the connections between knowledge and power.

1 Introduction

The use of Artificial Intelligence (AI) has been the subject of increasing public debate. These systems have commonly been described as the newest and ultimate solution for performing repetitive tasks, which would free up human time for more satisfying activities (Federspiel et al., 2023).

AI is far from new. Artificial Intelligence originated as a discipline back in 1956. At the time, the hope was that the field of human cognition would describe the functioning of human intelligence and its learning processes so well that a machine would be able to replicate them. If in the beginning the intention was to make a human-like machine, today the aim of AI is to overcome human limitations. Current applications range from virtual assistants to precision agriculture. The latest wave of frenzy, however, is due to the ability to emulate human writing displayed by large language models (LLMs), a type of text-producing AI, of which the most notorious example is ChatGPT (Shanahan, 2024).

LLMs are generative mathematical models which emulate the statistical distribution of tokens — words, parts of words or individual characters — as found in the collections of human-produced texts on which the model is trained. In simpler terms, sentences are produced word by word, according to the probability of word associations observed in the training texts (Shanahan, 2024).
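To make this idea concrete, the sketch below, in Python, illustrates next-token sampling with a toy bigram model built from word-pair counts. It is a didactic analogy only, under invented data: real LLMs learn the distribution with neural networks over subword tokens rather than by counting, and the tiny corpus here is made up for demonstration.

```python
import random
from collections import defaultdict

# Didactic analogy only: real LLMs learn the token distribution with neural
# networks over subword tokens. Here we simply count which word follows
# which in a tiny invented corpus and sample proportionally to frequency.

corpus = "the model predicts the next word the model samples the next token".split()

# Bigram table: for each word, the list of words observed right after it
# (duplicates preserved, so random.choice samples proportionally).
follows = defaultdict(list)
for prev, nxt in zip(corpus, corpus[1:]):
    follows[prev].append(nxt)

def generate(start, length=8):
    """Emit tokens by repeatedly sampling from the observed distribution."""
    out = [start]
    for _ in range(length):
        candidates = follows.get(out[-1])
        if not candidates:  # no observed successor: stop generating
            break
        out.append(random.choice(candidates))
    return " ".join(out)

print(generate("the"))  # e.g., "the model samples the next word the model ..."
```

The output is fluent-looking text assembled purely from observed probabilities, with no model of meaning behind it, which is the point the paragraph above makes at scale.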

The ever-growing ability to emulate human discourse and cultural products in the most diverse genres generates content on an unprecedented scale and challenges various practices, including arts and sciences (Thorp, 2023). The application of generative AI to scientific purposes is of special concern, especially in the field of health sciences, even more so if it informs automated decision-making in healthcare practices.

“Artificial Intelligence” is an umbrella term that covers several types of algorithms coded in different programming languages, with very different complexities and purposes. AI applications are most commonly the result of a combination of different algorithms and are generally meant to interact with other technologies, such as search tools and social media, but also neurotechnology. The compounded effects of this convergence are not being sufficiently taken into account in ethical reasoning.

There is a significant mismatch between the dimension and complexity of the challenges posed by digital transformations and the high-level moral principles offered as guidelines. Not only do these principles lack enough of a grip on reality, but the debate led by ethics boards in tech companies may be part of diversionary tactics intended to avoid regulation (Munn, 2023).

This kind of uselessness and toothlessness criticized in digital ethics can be identified in previous attempts to offer ethical guidelines for technoscientific development. From the Nazi experiments to the syphilis studies performed in Tuskegee and in Guatemala, and from the abuse of Henrietta Lacks's cells to the implementation of an ethical double standard for HIV/AIDS research in Africa, the contexts which led to these Human Rights violations in scientific settings were not given enough consideration, and the subsequent ethical guidelines can all roughly be summarized in three main principles: autonomy, beneficence, and a largely unspecified justice.

First, the inefficiency of those approaches can be attributed to the fact that the very existence of an ethical debate around an emergent technology means that the correct decisions are not self-evident; guidelines that merely restate abstract principles therefore have little practical function. Secondly, those atrocities were not a mere result of inattention to the wishes of patients or participants. The reason all those attempts to draft ethical guidelines failed is a lesson to be learnt: it cannot be ignored that dehumanization is a prior and instrumental step toward exploiting human lives.

This perspective sheds some light on the risk most commonly associated with generative AI: plagiarism, or damage to scientific integrity related to the use of LLM outputs (Flanagin et al., 2023). Even though GenAI can be used in a wide variety of ways in academic writing, some with results not so far removed from those of online search engines, LLMs can also be used to formulate questions, suggest ideas or even change the argumentative structure of an article, interfering so much in the conception and writing style that authorship becomes a debatable matter (Kaebnick et al., 2023).

The moral background is that misappropriation is deemed an unacceptable academic practice, at least on an individual level. Paradoxically, the impact of using LLMs, which result from massive knowledge capture, has not been proportionally discussed in most scientific endeavors.

Artists, in contrast, are denouncing the harmful nature of the appropriation of their works by generative AI (Allyn, 2023; Webster, 2023). In an unusual reenactment of the maxim “life imitates art,” Scarlett Johansson, who voiced the virtual assistant in the 2013 movie “Her,” alleged in 2024 that her voice had been copied after she turned down OpenAI's offer to voice Sky, the ChatGPT virtual assistant (Mickle, 2024). Following the controversy, the company replaced the disputed voice (OpenAI, 2024). In another example, Hayao Miyazaki and, more recently, Guillermo del Toro expressed disgust toward AI-generated drawings and animations (Leatham, 2022; Leatham, 2023). Both justified their esthetic disapproval by pointing to the absence of emotion and empathy in machines. These very human attributes, which the arts are dedicated to depicting, are also fundamental to ethics.

It is possible to argue that other technological advances were also criticized at first, but then contributed to disseminating and popularizing the arts. Couldn't GenAI be, for many human creative practices, a popularization tool similar to what sound recording was for music from the end of the nineteenth century?

Although one might entertain the idea of an increasingly shared way of producing knowledge and benefiting from it, the first step toward fulfilling such promises is to recognize that plagiarism is far from the main ethical risk related to GenAI.

It is known that scientific discourse and practices have the effect of validating and remediating some forms of distress while silencing and oppressing others. Convergent technologies, such as AI, neurotechnology and genetics, mobilize conceptualizations related to reason, rationality, mental health and intelligence which are historically linked to strategies of classification, hierarchization, discipline and exploitation in colonial power dynamics (Illes et al., 2025).

For more than a decade, Global South scholars from diverse disciplines have warned of the negative and unjust effects of digital technology. Tech companies increasingly reenact colonial power dynamics, promoting global expansion to generate socioeconomic dependency on their activities and products. Democratic safeguards are bypassed so that the accumulation of value obtained from the overexploitation of the workforce and of environmental resources, such as minerals and energy, can happen without accountability. All of this is done in the name of humanistic values, portrayed as a civilizing mission for the good of humanity, especially for “uncivilized” people (Nothias, 2025).

Although technodigital colonialism, data colonialism, algorithmic colonialism and digital colonialism are terms used interchangeably to describe the way colonial power dynamics relate to digital technologies, the literature addresses a diversity of effects from different geographical locations and academic areas (Nothias, 2025). For example, Ricaurte (2019) focuses on possible damaging consequences, in terms of access to public services by racial minorities, resulting from the national biometric ID project in Kenya. The colonial exploitation of mineral resources in the Democratic Republic of Congo and its major political, environmental and health effects on the local population, including those related to child labor, was highlighted by Oyedemi (2019). Prasad (2018) describes how anti-colonial activists in India denounced Facebook for its strategy of creating dependency on digital services. In political contexts, predictive systems fed with massive amounts of personal data are used to implement social policies and for surveillance purposes all over the world, without public debate about how these measures affect historically vulnerable populations.

In this paper, we illustrate with cases from the United States and Brazil how generative AI, while presenting itself as a neutral tool, may increase inequality by reenacting colonial power dynamics. A decolonial theoretical framework subsequently addresses the centrality of the distinction between reason and nature to colonialism, exploring the ties between knowledge, power and subjectivation in a colonial matrix of power. Enunciation of knowledge is then proposed as the focus for ethical analysis, which makes it possible to identify the production, enunciation and validation mechanisms related to trusted discourses, such as science, journalism and law, in contrast to those related to artificially generated discourse.

2 GenAI ethical challenges

One of the most promising applications of AI is offering more efficient and rational resource allocation solutions. However, Obermeyer et al. (2019) warn of the possible harmful effects of using an AI system designed to determine investment priorities in public health. The authors describe an algorithm that uses treatment costs already spent as a proxy for health needs when recommending patients for preventive health measures, with the goal of reducing the costs of therapeutic interventions. Although the cost of care was deemed an adequate proxy, racial bias gained scale with the widespread use of the algorithm by health insurance companies in the US. Since racism leads to less empathy for the pain and suffering of black and non-white patients, the algorithm ends up ranking past therapeutic investment rather than the severity of the illness itself. White patients, who historically receive more resources and therapeutic investment, are thus also prioritized in preventive actions. The choice of this proxy is clearly problematic: while the algorithm appears to offer statistically based guidance, it feeds back and perpetuates racial inequality in healthcare on a larger scale.
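The mechanism can be illustrated with a deliberately simplified simulation in Python. All numbers and names below are invented for illustration; the sketch does not reproduce the actual algorithm studied by Obermeyer et al., only the general point that ranking by a spending proxy under unequal access reproduces the disparity described above.

```python
import random

random.seed(0)

# Deliberately simplified sketch of the proxy problem described by
# Obermeyer et al. (2019). All numbers are invented. Both groups have
# identical distributions of true illness severity, but historical
# spending reflects severity times an unequal "access" factor.

def make_patients(group, access, n=1000):
    patients = []
    for _ in range(n):
        severity = random.uniform(0, 10)                      # true health need
        cost = severity * access * random.uniform(0.8, 1.2)   # past spending
        patients.append({"group": group, "severity": severity, "cost": cost})
    return patients

pool = make_patients("advantaged", access=1.0) + make_patients("disadvantaged", access=0.6)

# The "algorithm": rank patients by past cost and pick the top 20% for
# extra preventive care, as if cost measured need.
selected = sorted(pool, key=lambda p: p["cost"], reverse=True)[: len(pool) // 5]

for group in ("advantaged", "disadvantaged"):
    chosen = [p for p in selected if p["group"] == group]
    mean_sev = sum(p["severity"] for p in chosen) / max(len(chosen), 1)
    print(f"{group}: {len(chosen)} selected, mean severity {mean_sev:.1f}")

# Typical result: far more advantaged patients are selected, and the
# disadvantaged patients who do make the cut are sicker on average, i.e.,
# they must be sicker than an advantaged patient to receive the same care.
```

Even with identical underlying need, the proxy silently converts unequal past treatment into unequal future treatment, which is exactly the feedback loop the paragraph describes.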

Generative AI was widely used during the 2024 Brazilian mayoral elections to create images, videos and audio recordings, the so-called deepfakes, while tech platforms did little about it and even resisted judicial orders meant to curb the effects of misinformation on the elections. Historically marginalized from voting and representation, women running for office were especially targeted and were the main victims of deepnudes, a compounded form of political and sexual violence. At least five female candidates filed police reports, including one running for mayor of São Paulo, the largest city in Brazil (Cruz et al., 2024).

An international divide is currently marked by an unmatched concentration of power related to digital technologies in the Global North. Peripheral countries mainly feature as data farms and, despite the pervasiveness of digital surveillance in those contexts, the outputs of GenAI still fail to properly depict their diversity. Silveira and Lima (2024) studied Gemini's outputs when asked, in Brazilian Portuguese, to describe white and black people in different settings, such as work and leisure. The GenAI offered gendered descriptions of human activities, with more detailed physical attributes in narratives about women. Even though the tool mentioned the importance of inclusion when describing white people performing activities, its actual functioning was contradictory: significantly, it did not offer narratives when asked to describe black people performing the same activities, under the argument that it was ‘just a language model’.

GenAI is often conceived as part of the path toward Artificial General Intelligence (AGI), which would not only be capable of offering non-pre-programmed answers, but also of learning on its own how to perform new tasks. AGI mobilizes human expectations of finding deus ex machina answers to humanity's problems. Many of the present utopian ideals imply convergence with other technologies and are reminiscent of the eugenics movement that dominated scientific hopes of answering social problems from the end of the 19th century to the first half of the 20th century. The expectation of solving social problems without tackling their historical causes has not only proven wrong and inefficient, but often a camouflage for the intent to neglect or further exploit vulnerable populations (Gebru and Torres, 2024).

3 Decolonizing ethical thinking

Behind all the most significant bioethical events, which are nothing less than Human Rights violations in scientific contexts, dehumanization is based on naturalized attributes (such as race, gender, sexual orientation and disability). This mechanism comes from the core of modernity/coloniality, which sustains the hierarchized distinction between reason and nature as one of its most distinctive ideas (Quijano, 2000). The distinction between human and non-human based on rationality is not new in Western knowledge systematization, as it traces back to Ancient Greece. But for modern science, which stems from a combination of Cartesian dualism and Baconian empiricism, nature is not just inferior; it is something to be mastered. More than that, the goal of science and the destiny of mankind is to enlarge the epistemic empire over nature (Irving, 2006).

As a result, pursuing the mission civilisatrice placed colonized people in a natural situation of inferiority in relation to civilized humans. Thus, oppression and exploitation were perceived as more than justifiable practices; they represented the fulfillment of a manifest destiny (Quijano, 2016). The use of scientific knowledge to reduce human beings to the mere manifestation of biological attributes has been the modus operandi of much bias and prejudice ever since.

The continued topicality of those mechanisms motivates coloniality as a term. Although used interchangeably with colonialism, the term highlights the historical continuity of colonial hierarchization strategies in contemporary power dynamics. Knowledge and power, but also subjectification, i.e., the formation of an individual conception of oneself, have been intertwined since the inaugural power dynamics set by colonialism at the beginning of globalization (Quijano, 2019).

Coloniality of knowledge refers to the control over the enunciative/epistemic apparatus. Coloniality of being unfolds the ties between colonial power and lived experience. On one side of the spectrum of existence there is the universal human, a subject that produces knowledge, and on the other, a racialized and colonized being, a sub-alterity, a mere object about which humans produce knowledge (Quijano, 2019).

While recognizing that coloniality is lived and sensed in variable ways, Mignolo and Bussmann (2023) draw attention to a common aspect of it: the authorization/destitution mechanism, that legitimates specific enunciative discourses by invalidating others. Based on this, it is suggested that comprehending the role of knowledge in the colonial matrix of power is a matter of “focusing on the enunciation of Western knowledge, instead of on its enunciated content.”

In the context of AI and Big Data in general, the aspiration to protect privacy and intellectual property is depicted as obsolete, if not selfish, since knowledge as a common good is the flagship of the defense of digital transformation. But if attention is turned from the content to the way GenAI operates, a contradiction manifests itself:

Data about people today is less a public asset and increasingly privately funded, collected, and analyzed. (…) In the 21st century, a new transformation of social knowledge is underway, driven not by governments but by corporations. The huge increase in commercial knowledge of everyday life since the 1980s now dwarfs what states know about social subjects, a change accelerated by the emergence of commercial platforms. Such transformation empowered new corporate actors to render social life more “trackable and tractable.” This new model of social governance has fuzzy limits. Once a “social graph” is in place, no human interaction seems free from corporate intervention: The very notion of data-driven intervention implies a datafied social good to be actualized (Magalhães and Couldry, 2021).

Zuboff (2019) gives an insightful account of that dynamic as surveillance capitalism, a new phase in which privacy is the main commercialized commodity. In this context, although data-driven products are designed to be perceived as personal, privacy is sold in bulk in the form of aggregate data. The author was one of the first to unmask the “knowledge as common good” claims by pointing out that selling data about people is the trade which has made all the billionaire fortunes since the 2000s.

The economic dimension is just one of the aspects of data colonialism. Before the commercialization and data capture can take place, massive exploitation must be naturalized (Couldry and Mejias, 2018). In turn, naturalized exploitation is preceded by selective dehumanization. It is no coincidence that historical mechanisms of discrimination are updated and up-scaled while becoming progressively harder to detect. While the neutrality and immateriality of algorithmic functioning are described as the steps toward tackling society’s problems, technological advancements (such as GenAI) contribute to digital colonialism when uncritically developed and consumed (Faustino and Lippold, 2023).

Apocalyptic scenarios are not inevitable, but there is no reason to expect technobillionaires to know what humanity needs and what social goods are attainable and at what cost (Benjamin, 2024).

Technodigital approaches to progress lean on an artificial kind of intelligence and its generative abilities, superintelligence and deep learning. The lexicon mobilized in those expectations is unequivocal: knowledge production is central (Benjamin, 2024). The reason data collection, machine learning and generative AI are changing human relations is not only that they have become technically feasible, but that these technologies play an increasing role in the enunciation of legitimate discourses. The purpose of this paper is precisely to focus on the role that epistemic dynamics play in the risk of digital colonialism. If we are to imagine that technology will lead to common knowledge sharing, it is first necessary to ascertain that people are not being dispossessed or exploited by it.

4 Discussion

4.1 On knowledge

A decolonial reflexive exercise is interested in how knowledge enunciation may contribute to inequity, domination, and exploitation, and in how these effects come to be naturalized. Thus, in order to investigate the risk of digital colonialism related to text-generative AI, it is first and foremost necessary to examine how discourse is artificially generated and to ask why humans are convinced by it.

Artificially generated texts are considered trustworthy even though the workings of most algorithms are opaque to human knowledge. If it is increasingly difficult to distinguish human from non-human discourse, explaining the legitimacy of discourse cannot be reduced to the analysis of its content.

Long before “post-truth” was declared the word of the year in 2016, Foucault (1977a) argued that analyzing regimes of truth is not about asserting which is the “truest” truth in dispute, but about investigating how discourses come to be socially accepted and appropriated, while interacting in a mutually transformative relation with pre-existing beliefs and opinions. The regime of truth is a conceptual framework meant to assess how the enunciation of truth shapes social, economic and political arrangements while acting upon subjectification processes.

Power dynamics play a great part in determining whose discourses will be considered true, and how those discourses, once held truthful, will reflexively reinforce power positions. On the other hand, discursive content bears a set of values that shapes individual consciences and bodies, acting as a subjectification mechanism and, at the same time, reshaping social conduct.

Parallel to the rise of the digital society, public trust in science, journalism and the legislative-legal system has declined. The correlation is not a mere coincidence: LLM outputs are meant to be (and increasingly are) used as surrogate enunciation instruments. Algorithms are often opaque and inexplicable. Nevertheless, their outputs are trusted to be neutral, objective and accurate (Domingos, 2022).

LLMs seem to be part of a set of enunciation discourses based on majoritarianism. According to Lifton (2012), the phenomenon has to do with the expectation that something repeated many times by many people is less likely to be a lie than something said by only a few people. This is, in itself, probabilistic thinking. In contrast, aspects of scientific practice, such as elaborating hypotheses and building on the prior scientific foundation which allows a theory to be postulated and then tested by empirical methods, are not involved in the production, enunciation or validation of knowledge by AI tools.

Resorting to text-generative AI and to the gain in scale of discursive dissemination granted by social media, denialism often claims to unveil the real political character of science, law and journalism, when it should be obvious that politics is precisely the set of discourses and practices that structures social organization. Trust in science is partially due to the institutional and collective way in which it is developed. Rituals and specific rules of social recognition, mechanisms of control and normalization are ingrained in the scientific community. Therefore, affirming that scientific enunciation depends on prior theoretical foundation and empirical demonstration does not mean denying the social and political dimension of the scientific enterprise; it is, in fact, quite the opposite. Scientific knowledge is a collective community endeavor that, according to Foucault (1976, 1977b), dictates normalcy parameters for mental health and human conduct, influencing the design of legal systems and civil organization.

Coloniality of knowledge describes how scientific knowledge and its collective character, supposedly based on rationality, neutrality and objectivity, reserve for some the position of subjects of knowledge while delegitimizing other epistemic practices, degrading colonized beings to the mere status of objects of knowledge and denying them the possibility of elaborating and using their own categories to describe themselves and the phenomena that interest them. This epistemic privilege of dictating what is desirable and normal in human conduct, in contrast with what should be deemed uncivilized, pathological or criminal, is a pillar of historical colonial power dynamics (Mignolo and Walsh, 2018).

Although knowledge is always intertwined with power, trust in science, law and journalism has to do with the possibility of the truth standing up against the majority, common sense and power. Investigation can reveal a journalistic scoop, provide decisive counter-evidence in law or start a scientific revolution. The possibility of prevailing against widely accepted perceptions provides trust. For this reason, bearing resemblance to decolonial scholarship, Çelik and Haydari (2022) argue that feminist journalism can be seen as a decolonial practice of resistance which goes back to non-Western cultures. The public prestige of a media outlet ensures safety for those who confide in journalists, and it also legitimizes the discourses and the veracity of facts in the public sphere.

Democratic societies are ideally structured according to science, law and freedom of the press; therefore, these enunciative practices are not only political, but the very basis of democratic politics. The relation between knowledge and power is irrevocable. In contrast, enunciation resulting from a digital majoritarianism, based on the expectation that advanced and massive data-processing mechanisms will lead to neutral, truthful and precise ways of conducting collective phenomena, has not only proven mistaken, as we unfortunately witnessed in pandemics and election interference, but is also a very political way of organizing society, one that evades accountability and its checks and balances. The authoritarian connotation of this kind of power dynamic is becoming more obvious every day.

Even while recognizing that the democratic world itself displays colonial remnants in its unjust power dynamics, majoritarianism disguised as democracy does not seem to be the answer. If the opaqueness of GenAI, from data gathering to algorithmic processing mechanisms, is not properly dealt with, humanity faces unprecedented risks of perpetuating bias and social inequities in an upscaled technological colonialism.

4.2 On knowledge to self

Alienated from the legitimated position of enunciating trusted discourses about their own perceptions of reality, colonized beings end up with a mirror that can only offer a distorted reflection of themselves. This, states Quijano (2000), possibly generates the cruelest effect of coloniality: the fact that colonized beings do not want to coincide with themselves. Legitimate knowledge discourses offer parameters according to which the appearance, morality and rationality of peripheral beings are inferior, which makes the desire not to be oneself the mark of the colonized subject.

Coloniality of being differentiates subjects and objects of knowledge. With data-driven technologies, an extreme process of knowledge peripheralization takes place, and most of the world's population is seen merely as a data source. Databases used for machine learning must be ethically sourced and representative. However, even considering diversity in data collection, the resulting technology portrays supposedly universal values that are not representative of all subjectivities, which contributes to a sense of inadequacy and to the oppression of peripheral subjects. Hence the importance of AI development not being a monopoly. Diverse existences and cosmovisions must influence technological design in all its phases. This will not only guarantee that artificially generated outputs are more representative but, most importantly, that more people have a voice in defining the set of human problems that technologies are destined to solve (Gebru et al., 2021).

In the opposite direction, most convergent technologies currently tie technoscientific improvement to ideals of productivity, efficiency and physical-cognitive perfection. Updating many eugenic propositions by classifying bodies and their subjectivities on scales of value, technological colonialism may upscale the discrimination and exploitation of those deemed inferior (Gebru and Torres, 2024).

4.3 On knowledge to power

Algorithmic governmentality describes how information and subjectification are closely intertwined in power dynamics. The optimization of individual behavior and social interactions shapes docile, predictable and productive social conduct. The emerging modes of governmentality are accompanied by the destabilization of other trustworthy discursive practices in Western social organization (Rouvroy and Stiegler, 2016), mainly science, journalism and legal norms.

If initially more circumscribed to consumption and social interactions, digital tools such as GenAI are progressively assuming more active parts in the political arena, contributing to shape political orientation and election results. In the latter case, humanity witnesses an unseen accumulation of economic power in big tech companies associated with authoritarian tendencies, all presented as a way of promoting more rational and efficient governmental practices (Harrington, 2024).

In peripheral countries, besides the concentration of power, which leads to increasing immunity from democratic and social constraints, the emerging power dynamics also work to enable and naturalize the unequally distributed effects of the use of energy and other natural resources, and the exploitation of low-wage work (Faustino and Lippold, 2023).

5 Conclusion

Enunciation of knowledge is a promising focus for ethical analysis of technoscientific phenomena. It allows challenging the common assumption that data is simply data. The choices related to the purpose of AI, but also to the parameters and mechanisms of data collection and processing are not random or derived from a natural order of the world. On the contrary, all of these processes are part of a system of values and purposes which modulates subjectivity and human collective behavior.

Data do not lie; there is truth to them, not because AI is objective and infallible, but because it alters, to a large extent, the reality it describes. By offering a perspective portrayed in artificially generated discourse, GenAI has the power to effectively make that perspective more relevant. In a circular effect, texts composed by generative AI are disseminated on the Internet and then used to further train and give feedback to generative AI. Repeated a thousand times, the result is self-legitimation. All this happens while terms such as extraction, collection and processing reinforce the perception that truth emerges from numbers, in the absence of interest, bias or power.
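As an illustration of this circularity, the minimal Python sketch below repeatedly retrains a toy "model" (a simple bag of words, an invented stand-in rather than a real LLM) on its own outputs. Finite resampling steadily erases rare words, a didactic analogue of the self-legitimating loop described above and of what the machine learning literature calls model collapse.

```python
import random
from collections import Counter

random.seed(1)

# Toy illustration of the circular effect described above: a "model" that is
# repeatedly retrained on its own outputs. The model here is just a bag of
# words sampled with replacement (an invented stand-in, not a real LLM).
# Finite sampling makes rare words disappear, so diversity shrinks with
# each generation.

vocabulary = [f"word{i}" for i in range(500)]
corpus = [random.choice(vocabulary) for _ in range(500)]  # human-written "web"

for generation in range(8):
    print(f"generation {generation}: {len(Counter(corpus))} distinct words")
    # "Generate" a new corpus from the current model, then retrain on it:
    # each generation samples only from what the previous one produced.
    corpus = [random.choice(corpus) for _ in range(500)]
```

Run after run, the count of distinct words falls: whatever the model happens to emit becomes the whole of what it can learn next, which is the self-legitimation the conclusion warns about.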

Analyzing how GenAI, mainly LLMs, plays an increasing role in western enunciation apparatuses includes recognizing how it (through Big Data) issues discourses and how they are socially sanctioned. Decoloniality readings will account for why the benefits and risks are not universally distributed and will help ethical responses be more attentive and more able to stand against exploitative power dynamics.

Author contributions

LC: Writing – original draft, Writing – review & editing. MP: Writing – original draft, Writing – review & editing.

Funding

The author(s) declare that no financial support was received for the research and/or publication of this article.

Conflict of interest

The authors declare that the research was conducted in the absence of any commercial or financial relationships that could be construed as a potential conflict of interest.

Generative AI statement

The author(s) declare that no Gen AI was used in the creation of this manuscript.

Publisher’s note

All claims expressed in this article are solely those of the authors and do not necessarily represent those of their affiliated organizations, or those of the publisher, the editors and the reviewers. Any product that may be evaluated in this article, or claim that may be made by its manufacturer, is not guaranteed or endorsed by the publisher.

References

Allyn, B. (2023). Movie extras worry they’ll be replaced by AI. Hollywood is already doing body scans. Available online at: https://www.npr.org/2023/08/02/1190605685/movie-extras-worry-theyll-be-replaced-by-ai-hollywood-is-already-doing-body-scan [Accessed October 17, 2024].

Benjamin, R. (2024). Imagination: a manifesto. New York: WW Norton & Company.

Çelik, B., and Haydari, N. (2022). Parrhesia as journalism: learning from the truth- and justice-seeking women journalists of twentieth century Turkey. J. Stud. 23, 1607–1624. doi: 10.1080/1461670X.2022.2096667

Couldry, N., and Mejias, U. A. (2018). Data colonialism: rethinking big data’s relation to the contemporary subject. Telev. New Media 20, 336–349. doi: 10.1177/1527476418796632

Cruz, M., Santos, N., Carreiro, R., Nóbrega, L., and Amorim, G. (2024). AI in the 2024 Brazilian elections. Salvador and São Paulo: Aláfia Lab & Data Privacy Brasil.

Domingos, J. (2022). Foucault e a pós-verdade: reflexões sobre a contemporaneidade e os novos regimes de verdade. Policromias Rev. Estudos do Discurso, Imagem e Som 7, 280–298. doi: 10.61358/policromias.v7i1.52556

Faustino, D., and Lippold, W. (2023). Colonialismo digital: por uma crítica hacker-fanoniana. São Paulo: Boitempo Editorial.

Federspiel, F., Mitchell, R., Asokan, A., Umana, C., and McCoy, D. (2023). Threats by artificial intelligence to human health and human existence. BMJ Glob. Health 8:e010435. doi: 10.1136/bmjgh-2022-010435

Flanagin, A., Bibbins-Domingo, K., Berkwits, M., and Christiansen, S. L. (2023). Nonhuman “authors” and implications for the integrity of scientific publication and medical knowledge. JAMA 329, 637–639. doi: 10.1001/jama.2023.1344

Foucault, M. (1976). The history of sexuality: The will to knowledge, vol. 1. London: Penguin.

Foucault, M. (1977a). The political function of the intellectual. Radic. Philos. 17, 12–14.

Foucault, M. (1977b). Discipline and punish: The birth of the prison. New York: Pantheon Books.

Gebru, T., Morgenstern, J., Vecchione, B., Vaughan, J. W., Wallach, H., Daumé III, H., et al. (2021). Datasheets for datasets. Commun. ACM 64, 86–92. doi: 10.1145/3458723

Gebru, T., and Torres, É. P. (2024). The TESCREAL bundle: eugenics and the promise of utopia through artificial general intelligence. First Monday 29:13636. doi: 10.5210/fm.v29i4.13636

Harrington, B. (2024). Offshore: Stealth wealth and the new colonialism. New York: WW Norton & Company.

Illes, J., Dudley, M., Urdzikova, L. M., Podina, I., and Pyrrho, M. (2025). The risk of neurotechnology as an instrument of colonialism. Brain Commun. 7:fcaf139. doi: 10.1093/braincomms/fcaf139

Irving, S. (2006). ‘In a pure soil’: colonial anxieties in the work of Francis Bacon. Hist. Eur. Ideas 32, 249–262. doi: 10.1016/j.histeuroideas.2006.03.001

Kaebnick, G. E., Magnus, D. C., Kao, A., Hosseini, M., Resnik, D., Dubljević, V., et al. (2023). Editors’ statement on the responsible use of generative AI technologies in scholarly journal publishing. Med. Health Care Philos. 26, 499–503. doi: 10.1007/s11019-023-10176-6

Leatham, T. (2022). Guillermo del Toro and Hayao Miyazaki share the same stance on AI animation. Available online at: https://faroutmagazine.co.uk/guillermo-del-toro-hayao-miyazaki-ai-animation/ [Accessed May 5, 2025].

Leatham, T. (2023). Hayao Miyazaki says he is ‘utterly disgusted’ by AI. Available online at: https://faroutmagazine.co.uk/hayao-miyazaki-on-ai-utterly-disgusted/ [Accessed May 5, 2025].

Lifton, R. J. (2012). Thought reform and the psychology of totalism: A study of ‘brainwashing’ in China. Chapel Hill: UNC Press Books.

Magalhães, J. C., and Couldry, N. (2021). Giving by taking away: big tech, data colonialism and the reconfiguration of social good. Int. J. Commun. 15, 343–362.

Mickle, T. (2024). Scarlett Johansson said no, but OpenAI’s virtual assistant sounds just like her. Available online at: https://www.nytimes.com/2024/05/20/technology/scarlett-johannson-openai-voice.html [Accessed October 17, 2024].

Mignolo, W. D., and Bussmann, F. S. (2023). Coloniality and the state: race, nation and dependency. Theory Cult. Soc. 40, 3–18. doi: 10.1177/02632764221151126

Mignolo, W. D., and Walsh, C. E. (2018). On decoloniality. Durham: Duke University Press.

Munn, L. (2023). The uselessness of AI ethics. AI Ethics 3, 869–877. doi: 10.1007/s43681-022-00209-w

Nothias, T. (2025). An intellectual history of digital colonialism. J. Commun. 2025:jqaf003. doi: 10.1093/joc/jqaf003

Obermeyer, Z., Powers, B., Vogeli, C., and Mullainathan, S. (2019). Dissecting racial bias in an algorithm used to manage the health of populations. Science 366, 447–453. doi: 10.1126/science.aax2342

OpenAI. (2024). How the voices for ChatGPT were chosen. Available online at: https://openai.com/index/how-the-voices-for-chatgpt-were-chosen [Accessed October 17, 2024].

Oyedemi, T. (2019). Global digital capitalism: Mark Zuckerberg in Lagos and the political economy of Facebook in Africa. Int. J. Commun. 13, 2045–2061.

Prasad, R. (2018). Ascendant India, digital India: how net neutrality advocates defeated Facebook’s free basics. Media Cult. Soc. 40, 415–431. doi: 10.1177/0163443717736117

Quijano, A. (2000). “Colonialidad del poder, eurocentrismo y América Latina” in La colonialidad del saber: eurocentrismo y ciencias sociales. Perspectivas Latinoamericanas. ed. E. Lander (Buenos Aires: CLACSO), 118.

Quijano, A. (2016). “Bien Vivir” — between “development” and the de/coloniality of power. Alternautas 3, 10–23. doi: 10.31273/alternautas.v3i1.1023

Quijano, A. (2019). Colonialidad del poder, eurocentrismo y América Latina. Espacio Abierto 28, 255–301.

Ricaurte, P. (2019). Data epistemologies, the coloniality of power, and resistance. Telev. New Media 20, 350–365. doi: 10.1177/1527476419831640

Rouvroy, A., and Stiegler, B. (2016). The digital regime of truth: from the algorithmic governmentality to a new rule of law. La Deleuziana 3, 6–29.

Shanahan, M. (2024). Talking about large language models. Commun. ACM 67, 68–79. doi: 10.1145/3624724

Silveira, J. B., and Lima, E. A. (2024). Racial biases in AIs and Gemini’s inability to write narratives about black people. Emerging Media 2, 277–287. doi: 10.1177/27523543241277564

Thorp, H. H. (2023). ChatGPT is fun, but not an author. Science 379:313. doi: 10.1126/science.adg7879

Webster, A. (2023). Actors say Hollywood studios want their AI replicas — for free, forever. Available online at: https://www.theverge.com/2023/7/13/23794224/sag-aftra-actors-strike-ai-image-rights [Accessed October 17, 2024].

Zuboff, S. (2019). The age of surveillance capitalism: The fight for a human future at the new frontier of power. New York: PublicAffairs.

Keywords: artificial intelligence, generative artificial intelligence, large language models, digital ethics, digital colonialism

Citation: Cambraia L and Pyrrho M (2025) Generative artificial intelligence and the risk of technodigital colonialism. Front. Polit. Sci. 7:1628139. doi: 10.3389/fpos.2025.1628139

Received: 13 May 2025; Accepted: 13 June 2025;
Published: 02 July 2025.

Edited by:

Arkadiusz Modrzejewski, University of Gdansk, Poland

Reviewed by:

Mykola Polovyi, Comenius University, Slovakia
Tomasz Czapiewski, University of Szczecin, Poland

Copyright © 2025 Cambraia and Pyrrho. This is an open-access article distributed under the terms of the Creative Commons Attribution License (CC BY). The use, distribution or reproduction in other forums is permitted, provided the original author(s) and the copyright owner(s) are credited and that the original publication in this journal is cited, in accordance with accepted academic practice. No use, distribution or reproduction is permitted which does not comply with these terms.

*Correspondence: Monique Pyrrho, pyrrho.monique@gmail.com