OPINION article
Front. Commun.
Sec. Culture and Communication
Volume 10 - 2025 | doi: 10.3389/fcomm.2025.1604361
Rebranding Empire in the Age of Generative AI
Provisionally accepted
Department of Computer Applications, Marian College Kuttikkanam Autonomous, Kuttikkanam, India
The colonizers are back. They don't wear khaki anymore, nor do they come by steamship. They live in servers, speak in APIs, and follow the dominion of their training data (Nayel & Mohammed, 2024). They've come not in quest of land or gold, but for your tales, your symbols, your languages, and they'll gladly give you a ChatGPT-generated folk tale in return. Welcome to the age of algorithmic empire, where culture is compressed, simulated, and distributed at scale. If you thought colonialism was a thing of the past, think again: it's just gone cloud-native.

This paper aims to provoke new critical thinking about AI and cultural sovereignty, rather than rehearsing familiar ethical fears. Digital colonialism, which initially described the dominance of Western tech platforms over the information ecosystems of the Global South, has assumed a new shape. In this age of generative AI, the threat is subtler and more insidious. AI does not merely disseminate culture in the manner that social media once did; it actually generates it (Qadri et al., 2024). The issue now is not so much a question of who gets to speak, but who gets to speak through the machine.

Generative AI systems are increasingly positioned as cultural intermediaries, simulating and redistributing narratives drawn from vast datasets. While they are not the sole gatekeepers of culture, these systems exert a growing influence over what content is produced, circulated, and normalized, particularly in digital and educational contexts. In doing so, they may inadvertently reinforce epistemic and aesthetic hierarchies under the guise of neutrality (Nyaaba et al., 2024).

One can now ask an AI to write a Ghanaian folktale, draw a Siberian shaman, or compose an "Indian-style" poem in flawless English, all without engaging with the individuals to whom these traditions belong. AI software mimics culture without its memory. This is not representation; it is re-creation, a digitized form of appropriation masquerading as innovation.

The largest language models are trained predominantly on widely available, English-dominated data sources (Nyaaba et al., 2024). This is not a strict technological constraint, but rather a consequence of optimization strategies that prioritize scale, accessibility, and convenience. While these outcomes are not always consciously intended, this design trajectory has led, as some scholars have noted, to a silent, ongoing act of cultural appropriation in which underrepresented knowledge systems are excluded by default rather than by explicit design.

These datasets are scraped from Wikipedia articles, Reddit forums, and Common Crawl archives. But whose knowledge is scraped? Which languages are missing? Oral traditions, indigenous epistemologies, and minority cultures are nowhere to be seen (Gillespie, 2024). Not because they lack value, but because they fail the expectations of the machine.

Empirical studies show the consequences of this imbalance. For instance, an analysis of GPT-4o found that 44% of its capacity to reflect a society's values is directly correlated with the amount of digital data available in that language, and that error rates for low-resource languages are more than five times higher than for high-resource ones (Kazemi et al., 2024).
Another benchmark study demonstrated that GPT models, when prompted in non-English languages, often default to English-centric cultural references, for example suggesting Thanksgiving as a national holiday in countries where it is not celebrated, highlighting the dominance of Western perspectives in AI-generated content (Tao et al., 2023).

When these models produce a story or proverb, they are not tapping into universal truths; they are remixing second-hand references, filtered through Western lenses. Cloaked in the mystique of the "AI-generated," this content is invested with false authority (Fernández, 2023). What you get is not a global synthesis, but a cultural mirage. This is the cultural logic of empire: appropriate what is useful, discard what is difficult, and repackage it for global consumption. The commodity is no longer gold or rubber; it is culture (Oliinyk, 2020). And the agents of this new empire are neural networks and engineers.

A documented example of generative AI distorting cultural narratives can be seen in image-generation models like Midjourney, which have repeatedly misrepresented East Asian cultural symbols. For instance, users have reported that Midjourney often produces images of Japanese women dressed in traditional Chinese Hanfu or places Korean figures in settings with Japanese umbrellas, items that are culturally specific and not interchangeable (Zhao & Song, 2024). One user noted that an AI-generated image of a Japanese mother playing with her children showed her wearing a kimono in a park, which is culturally inaccurate because kimonos are not typical everyday wear in such contexts. These misplacements and stereotypical amalgamations create an exoticized but inaccurate portrayal of East Asian cultures, reflecting a superficial and decontextualized understanding rather than genuine cultural meaning (Hur, 2023). Similarly, in text generation, ChatGPT has been shown to produce traditional stories from the Philippines that are reshaped with Western narrative structures and sensibilities, often resembling Disney-style plots rather than authentic oral traditions (Garces-Bacsal et al., 2016). This phenomenon exemplifies what Kelly-Holmes terms "algorithmic monolingualism," whereby local cultural nuances are flattened into globally consumable but culturally diluted forms. Furthermore, ChatGPT has been documented to whitewash or gloss over complex historical realities by defaulting to dominant Western narratives, thereby downplaying marginalized perspectives (Kazemi et al., 2024). These instances highlight how generative AI tools act as new cultural gatekeepers, mediating meaning through dominant data and alignment processes that erase or distort the richness of local and Indigenous narratives.

Ask an AI image generator to create a "traditional African mask." You'll likely receive a composite of caricatured tribal lines, wood texture, and color patterns familiar from decades of Western media depictions. Ask it to generate an image of a "Hindu goddess in modern style," and you might receive a hypersexualized, Western-anime fusion of Kali or Durga. The issue is not inaccuracy so much as it is re-authoring (Zhou et al., 2024). The prompt doesn't retrieve culture; it prescribes it. And the model complies, generating a simulacrum that flatters the requester's expectations while erasing the original context (Bushey, 2023).

A growing body of research highlights how generative AI models like Midjourney and Stable Diffusion distort culturally significant symbols.
A 2024 study found that Stable Diffusion frequently produces racially and culturally homogenized imagery, such as portraying individuals from entire ethnic groups with a narrow set of stereotypical features, often blending distinct cultural elements inappropriately (Aldahoul et al., 2024). While not focused exclusively on Indigenous groups, the study illustrates a broader pattern in which AI-generated content collapses diverse cultural identities into oversimplified, often inaccurate portrayals. Scholars have raised ethical concerns that such outputs, especially when widely shared on social and commercial platforms, perpetuate stereotypes and confuse public understanding of cultural authenticity (Amer, 2023). These findings echo concerns from Indigenous communities that AI tools, by relying on biased or decontextualized training data, may reinforce exoticized and misleading representations of their heritage.

This is the tacit power of generative AI: it does not merely mimic; it rewrites. The user is the curator of culture, the AI its ghostwriter, and the source communities are, once again, invisible (Frenzke-Shim et al., 2024). It is colonialism by prompt engineering, with a friendly UI and a creative mode toggle. Even the most well-meaning ethical AI frameworks can become tools of symbolic inclusion, masking extractive dynamics beneath the language of 'responsible design' (Rajcic et al., 2024).

We call attention to the phenomenon of epistemic injustice (Kay et al., 2024), whereby dominant knowledge-making systems disenfranchise other epistemologies, not only through erasure but through simulation and rewriting. Recent scholarship highlights the growing concern over how generative AI systems and global markets commodify and misrepresent culturally significant symbols like Ghanaian Kente cloth. While no direct study yet ties AI image generators such as DALL-E or Midjourney to the commercialization of Kente patterns, extensive documentation exists on how Kente has been adapted and mass-produced in ways that strip it of cultural specificity. For instance, Wrapped in Pride, a major exhibition and research project, showed how Kente has been transformed from sacred regalia into mass-market accessories across the African diaspora, often losing its original meaning in the process (Quick, 2022). Boateng (2014) further critiques how Kente designs are widely replicated without consent, arguing that current intellectual property frameworks fail to protect the cultural rights of Ashanti weavers. This trend is echoed in contemporary consumer markets, where demand for "African-inspired" textiles continues to grow, raising concerns over misrepresentation and economic displacement (Adeloye et al., 2023). Together, these sources illustrate a broader dynamic in which traditional designs are extracted and reproduced in decontextualized forms, often to the detriment of source communities.

Looking ahead, several promising directions emerge for future research and dialogue on generative AI and cultural representation. First, there is a pressing need for more empirical studies examining the deployment of generative AI tools in non-Western contexts, particularly within educational, artistic, and linguistic domains.
Approaches such as ethnographic fieldwork and participatory action research could illuminate how local educators, artists, and language keepers interact with, adapt, or even reject AI-generated content, while content analysis of AI outputs, coded for cultural accuracy, stereotyping, and erasure, could offer quantitative insights into patterns of misrepresentation. Additionally, sociological inquiry should explore how marginalized communities perceive and resist algorithmic representations of their identities, using methods such as focus groups, semi-structured interviews, community surveys, and sentiment analysis of social media discussions to capture lived experiences and emerging counter-narratives. Interdisciplinary collaborations among AI developers, digital anthropologists, and postcolonial theorists are also crucial for developing cultural modelling frameworks that prioritize consent, context, and community control; the adoption of principles such as the CARE Principles for Indigenous Data Governance can help ensure AI systems respect community-defined boundaries and protocols. Finally, comparative research on data sovereignty policies across different geopolitical regions is needed to assess whether legal frameworks can effectively resist algorithmic exploitation, with policy analysis and case studies evaluating the impact of Indigenous data sovereignty laws and cultural impact assessments. By integrating these empirical methods (ethnography, content analysis, participatory research, sentiment analysis, and comparative policy studies), future research can move beyond theoretical critique to systematically document cultural harm, assess community responses, and identify pathways toward more just and accountable AI systems.

This last critique gestures toward a more basic transformation in cultural production, one in which representational authority is increasingly outsourced to predictive systems and knowledge is divorced from the social, historical, and ethical entanglements that once moored it. This is not singularity, nor is it wisdom, nor progress. It is neither imagination nor empathy. It is a grand illusion of creativity generated by machines that can replicate form without meaning, language without lineage, and vision without context. It is a pattern-matching engine trained on the digital detritus of the ruling class, automating familiarity and reinforcing power.

The colonizer no longer carries a flag, but he still speaks for you. Only now, he does it in 175 billion parameters, embedded in your apps, scaled for your platforms, and tuned to placate your biases.

The empirical examples of misrepresentation, appropriation, and erasure cited throughout this article underscore the urgency of this conceptual argument. They demonstrate that these are not abstract risks, but lived realities with material consequences for marginalized communities. As generative AI continues to evolve, it is essential that scholars, practitioners, and affected communities maintain rigorous, critical attention to the ways in which these technologies mediate, distort, and redistribute cultural authority. This is the empire of everything and nothing: an architecture of power without accountability, language without tradition, and story without authors. Only through continued empirical scrutiny and community-led resistance can we hope to challenge and reimagine the future of cultural production in the age of generative AI.
Keywords: generative AI, digital colonialism, cultural appropriation, epistemic injustice, cultural sovereignty
Received: 07 Apr 2025; Accepted: 23 May 2025.
Copyright: © 2025 S. This is an open-access article distributed under the terms of the Creative Commons Attribution License (CC BY). The use, distribution or reproduction in other forums is permitted, provided the original author(s) or licensor are credited and that the original publication in this journal is cited, in accordance with accepted academic practice. No use, distribution or reproduction is permitted which does not comply with these terms.
* Correspondence: DIVYA LAKSHMI S, Department of Computer Applications, Marian College Kuttikkanam Autonomous, Kuttikkanam, India
Disclaimer: All claims expressed in this article are solely those of the authors and do not necessarily represent those of their affiliated organizations, or those of the publisher, the editors and the reviewers. Any product that may be evaluated in this article or claim that may be made by its manufacturer is not guaranteed or endorsed by the publisher.