
MINI REVIEW article

Front. Hum. Dyn., 13 January 2026

Sec. Digital Impacts

Volume 7 - 2025 | https://doi.org/10.3389/fhumd.2025.1697293

This article is part of the Research Topic: Artificial Intelligence and Social Equities: Navigating the Intersectionalities in a Digital Age.

A critical review of AI-enhanced communication strategies in Southeast Asian startup ecosystems

Alem Febri Sonni*

  • Faculty of Social and Political Sciences, Department of Communication Sciences, Hasanuddin University, Makassar, Indonesia

This mini review examines the intersection of digital disinformation, artificial intelligence technologies, and cultural identity construction within Southeast Asian startup ecosystems. Through systematic analysis of 68 peer-reviewed studies published between 2019 and 2024, we synthesize the current understanding of how AI-enhanced communication strategies both combat and potentially amplify disinformation while navigating complex cultural identity negotiations. Research demonstrates that culturally sensitive AI implementation increases funding success rates by 34% and market reach by 58% among religious minority entrepreneurs when appropriate algorithms are employed. The review identifies three critical research areas: algorithmic bias in cultural content moderation, the role of religious identity in digital entrepreneurship, and platform-specific adaptations of traditional communication frameworks. Our analysis reveals significant gaps in cross-cultural AI ethics research and proposes the “Cultural-AI Communication Convergence” (CAICC) framework, integrating Islamic communication principles with contemporary digital marketing practices. Findings suggest that while AI technologies offer promising solutions for combating disinformation, implementation must account for diverse cultural values and religious sensitivities to avoid marginalizing minority voices in digital spaces.

1 Introduction

The rapid proliferation of artificial intelligence technologies in digital communication has fundamentally transformed how information spreads across cultural boundaries, particularly within the dynamic startup ecosystems of Southeast Asia. As the region emerges as a global digital economy powerhouse valued at over $130 billion (Google, Temasek, and Bain, 2024), with Indonesia, Malaysia, and Singapore leading technological innovation, the intersection of AI-enhanced communication strategies and cultural identity construction presents both unprecedented opportunities and significant challenges for information integrity and cross-cultural understanding.

Recent developments in digital disinformation research have revealed complex patterns of how false information spreads through algorithm-mediated platforms, with particular implications for culturally diverse societies (Vosoughi et al., 2018). The foundational work on algorithmic bias (Benjamin, 2020; Buolamwini and Gebru, 2018) provides critical context for understanding how AI systems can perpetuate cultural discrimination, revealing that machine learning models often exhibit significant accuracy disparities across different demographic groups. The phenomenon becomes increasingly complex when examining how entrepreneurs and digital influencers from religious and cultural minority backgrounds navigate AI-driven communication tools while maintaining authentic expressions of their identity. This intersection represents a critical yet underexplored area of scholarship that demands comprehensive theoretical and empirical investigation.

Southeast Asia’s unique demographic composition, combining the world’s largest Muslim population with rapid digital transformation, creates distinctive conditions for examining how AI technologies interact with traditional cultural values and communication practices. The region’s startup ecosystems have experienced remarkable growth, with digital platforms increasingly mediating business communication, personal branding, and community engagement across diverse religious and ethnic communities (Theresia et al., 2025). The emergence of “platform-specific masculinities” and culturally adapted AI-driven branding strategies, as documented in recent Indonesian media studies (Sonni et al., 2025), illustrates how digital technologies are reshaping traditional communication paradigms while simultaneously creating new forms of cultural expression and identity negotiation. These developments necessitate a critical examination of how AI systems can be designed and implemented to respect cultural diversity while effectively combating misinformation and promoting authentic communication.

2 Methods

This mini-review followed a systematic approach adapted from the PRISMA guidelines (Figure 1). We searched three major databases (Web of Science, Scopus, IEEE Xplore) using the search string: (“artificial intelligence” OR “AI” OR “machine learning”) AND (“cultural identity” OR “religious identity” OR “Islamic”) AND (“Southeast Asia” OR “Indonesia” OR “Malaysia” OR “Singapore”) AND (“startup” OR “entrepreneur” OR “digital communication”). Searches covered peer-reviewed publications from January 2019 to September 2024 and, combined with records identified through other sources, yielded 2,335 initial records after duplicate removal (2,246 from database searches and 89 from other sources).
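For transparency, the search string can be reconstructed from its four concept groups. The following Python sketch is illustrative only (the build_query helper is not part of the original protocol); it simply reassembles the reported Boolean query so it can be reused verbatim in each database's advanced search interface.

```python
# Illustrative sketch only: the four concept groups are copied from the Methods
# text; build_query() is a hypothetical helper that rebuilds the exact Boolean
# string used across Web of Science, Scopus, and IEEE Xplore.
CONCEPT_GROUPS = [
    ['"artificial intelligence"', '"AI"', '"machine learning"'],
    ['"cultural identity"', '"religious identity"', '"Islamic"'],
    ['"Southeast Asia"', '"Indonesia"', '"Malaysia"', '"Singapore"'],
    ['"startup"', '"entrepreneur"', '"digital communication"'],
]

def build_query(groups):
    """Join terms in each group with OR, then join the groups with AND."""
    return " AND ".join("(" + " OR ".join(terms) + ")" for terms in groups)

print(build_query(CONCEPT_GROUPS))
# ("artificial intelligence" OR "AI" OR "machine learning") AND ("cultural identity" OR ...
```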

Figure 1. PRISMA flow diagram showing the systematic review process from initial database search (n = 2,335) to final included studies (n = 68). Of the 2,335 records identified (2,246 from database searches and 89 from other sources), 1,488 were excluded as irrelevant at title screening; 436 of the remaining 847 were excluded at abstract screening for wrong context or focus; 328 of the 411 assessed at full-text review were excluded for insufficient data or quality; and quality assessment excluded a further 15, leaving 68 included studies (42 empirical, 15 theoretical, and 11 mixed).
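The screening counts reported in Figure 1 can be checked arithmetically. The snippet below is a minimal sketch that uses only the figures stated in the diagram and the Methods text.

```python
# Minimal sanity check on the PRISMA flow, using only the counts reported in
# Figure 1 and the Methods section; no other data are assumed.
identified = 2246 + 89                 # database records + other sources
after_title = identified - 1488        # excluded as irrelevant at title screening
after_abstract = after_title - 436     # excluded for wrong context or focus
after_fulltext = after_abstract - 328  # excluded for insufficient data or quality
included = after_fulltext - 15         # excluded at quality assessment

assert identified == 2335
assert (after_title, after_abstract, after_fulltext) == (847, 411, 83)
assert included == 68 == 42 + 15 + 11  # empirical + theoretical + mixed designs
```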

Studies were included if they addressed at least two of three core themes: AI-enhanced communication, cultural/religious identity, and digital entrepreneurship in Southeast Asian contexts. Exclusion criteria were: purely technical AI development without cultural context; studies focused outside Southeast Asia that lacked comparative analysis; opinion pieces and non-peer-reviewed sources; and studies lacking empirical data or a systematic theoretical framework.

Two independent reviewers conducted a three-stage screening process: title screening (2,335 to 847 records), abstract screening (847 to 411 records), and full-text review (411 to 83 records). Inter-rater reliability was high (Cohen's κ = 0.82). Disagreements were resolved through discussion, with a third expert consulted for unresolved cases. Quality assessment using adapted Mixed Methods Appraisal Tool criteria resulted in the final inclusion of 68 studies (15 were excluded for low quality scores).
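Agreement statistics of this kind are conventionally computed as Cohen's kappa on the two reviewers' independent include/exclude decisions. The sketch below illustrates the calculation with hypothetical decision lists; since the underlying screening decisions are not reported, it does not reproduce the κ = 0.82 value.

```python
# Sketch of the inter-rater agreement check. The decision lists below are
# hypothetical placeholders; the article reports only the resulting statistic.
from sklearn.metrics import cohen_kappa_score

reviewer_a = ["include", "exclude", "include", "exclude", "include", "exclude"]
reviewer_b = ["include", "exclude", "include", "include", "include", "exclude"]

kappa = cohen_kappa_score(reviewer_a, reviewer_b)
print(f"Cohen's kappa = {kappa:.2f}")  # agreement above ~0.80 is generally considered strong
```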

Data extraction followed a standardized protocol that focused on the following areas: AI application types, cultural/religious elements, entrepreneurial contexts, disinformation management approaches, and identity construction patterns. Synthesis employed narrative thematic analysis organized around the three critical research areas identified in the review objectives.
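As an assumed illustration of the extraction protocol, the five focal areas can be modeled as a structured record; the field names below paraphrase the protocol areas, and the example values are invented for demonstration rather than taken from any included study.

```python
# Assumed illustration of the standardized extraction protocol: one record per
# included study, with one field per focal area named in the Methods section.
from dataclasses import dataclass

@dataclass
class ExtractionRecord:
    study_id: str
    ai_application_type: str            # e.g., content moderation, personal branding
    cultural_religious_elements: str    # e.g., Islamic communication principles
    entrepreneurial_context: str        # e.g., hijabi startup ecosystem
    disinformation_approach: str        # e.g., detection, counter-messaging
    identity_construction_pattern: str  # e.g., halal personal branding

record = ExtractionRecord(
    study_id="S042",
    ai_application_type="algorithm-driven personal branding",
    cultural_religious_elements="hijab as a religious identity marker",
    entrepreneurial_context="Indonesian female-led startups",
    disinformation_approach="platform-level content verification",
    identity_construction_pattern="halal personal branding",
)
print(record.ai_application_type)
```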

3 Literature review and theoretical framework

Current scholarship on digital disinformation and AI-enhanced communication reveals several distinct yet interconnected research streams that inform our understanding of cultural identity construction in digital spaces. The foundational work on information cascades and algorithmic amplification provides essential context for understanding how misinformation spreads through AI-mediated platforms. Meanwhile, recent developments in platform studies (Gillespie, 2018; van Dijck, 2024) illuminate the culturally specific ways in which different digital environments shape communication practices.

Groundbreaking research by Sonni (2025) on economic conspiracy theories in Indonesian social media reveals the need for significant updates to traditional information cascade models, particularly in digital environments where algorithmic curation creates “super-cascades” that surpass natural information flow patterns. The study shows that conspiracy theories spreading through Indonesian platforms achieve remarkable cultural adaptation, incorporating local religious and ethnic elements that enhance their persuasive power beyond what conventional misinformation models would predict. Its Digital Disinformation Behavior Model illustrates how algorithmic amplification, social identity processes, and protection motivation combine to create new forms of information propagation that challenge traditional assumptions about rational information processing.

The intersection of artificial intelligence and cultural identity construction has been extensively examined through recent studies of platform-specific gender representations and entrepreneurial communication strategies. Research on Indonesian reality shows reveals how different digital platforms facilitate distinct forms of masculine expression, with TikTok showing 45% creative expression content compared to traditional television’s 65% conventional representations (Sonni et al., 2025). These findings align with the broader ethnographic findings of How the World Changed Social Media (Miller et al., 2016), which demonstrate that digital technologies must adapt to local cultural contexts rather than imposing universal standards. Together, these results suggest that platform affordances actively shape the expression of cultural identity rather than simply distributing existing content, highlighting the need for culturally sensitive AI algorithm design.

Contemporary research on AI-driven personal branding among hijabi entrepreneurs offers crucial insights into how religious minority entrepreneurs navigate algorithm-mediated communication while maintaining their authentic spiritual identity (Putri and Sonni, 2025). A comprehensive analysis of Indonesia’s hijabi startup ecosystem reveals that culturally sensitive AI implementation increases funding success rates by 34% and market reach by 58% when appropriate algorithms respect rather than marginalize religious identity markers. This research introduces the “Halal Personal Branding Framework,” demonstrating how Islamic communication principles can be successfully integrated with contemporary digital marketing practices through thoughtful design of AI systems. These findings are supported by broader research on Islamic entrepreneurship (Ahmad et al., 2010; Ramadani et al., 2015), which demonstrates how religious values can enhance rather than constrain business innovation.

The theoretical framework of “Cultural-AI Communication Convergence” emerges from synthesizing these diverse research streams. This conceptual model proposes that effective AI-enhanced communication in culturally diverse contexts requires three interconnected processes: algorithmic cultural sensitivity training, community-centered design approaches, and continuous feedback mechanisms that prevent cultural marginalization. The framework challenges technology-deterministic approaches to digital communication by emphasizing the active role of cultural values in shaping how AI systems should be designed and implemented, building on theoretical insights from surveillance capitalism theory (Zuboff, 2019) and algorithmic governance research (Katzenbach and Ulbricht, 2019).

Recent developments in algorithmic fairness research provide additional theoretical foundations for understanding how AI systems can either enhance or suppress cultural diversity in digital communication. Studies of algorithmic bias in religious content analysis reveal how mainstream AI tools often misinterpret or marginalize Islamic communication patterns, highlighting the critical importance of inclusive training data and culturally informed algorithm development (Noble, 2018). These findings align with broader research on algorithmic sovereignty, which emphasizes community control over digital representation and the need for AI systems that respect and preserve cultural diversity.

The emergence of information pandemic phenomena, as analyzed by Surjatmodjo et al. (2024), demonstrates how disinformation spread on social media platforms creates complex challenges for state resilience and social cohesion. Their critical review reveals how traditional approaches to combating false information often fail to account for cultural specificities in information processing and community trust patterns. This research highlights the importance of developing culturally informed counter-disinformation strategies that leverage rather than suppress local communication practices and community knowledge systems, thereby complementing broader research on the global disinformation order (Bradshaw and Howard, 2019).

4 Current research gaps and controversies

Despite significant advances in understanding digital disinformation and AI-enhanced communication, several critical research gaps persist that limit our comprehension of how these phenomena intersect with cultural identity construction in Southeast Asian contexts. The most significant gap involves the lack of longitudinal studies examining how culturally sensitive AI implementations evolve over time and their long-term impacts on community communication practices and identity expression.

The controversy surrounding algorithmic transparency and cultural representation represents another significant area of scholarly debate. Some researchers argue for complete algorithmic transparency to enable community oversight of AI systems. In contrast, others contend that excessive transparency could enable manipulation by bad actors seeking to exploit cultural identity markers for disinformation campaigns (Hao, 2019). This debate becomes particularly complex when examining how religious and ethnic minority communities can maintain authentic expression while protecting themselves from algorithmic discrimination. The challenges of algorithmic accountability (Ananny and Crawford, 2016) suggest that this balance becomes increasingly complex as platforms implement opaque recommendation systems.

Methodological controversies persist regarding the appropriate approaches for studying culturally sensitive AI implementations. Traditional computer science approaches to algorithm evaluation often fail to capture subtle cultural nuances that determine whether AI systems enhance or suppress authentic cultural expression. Recent attempts to integrate ethnographic methods with computational analysis have shown promise but remain underdeveloped in the specific context of Southeast Asian startup ecosystems and digital entrepreneurship, as highlighted by research on the evolution of telecommunications policy in Asia (Kshetri, 2017).

Emerging trends in Southeast Asian digital governance reveal an increasing sophistication in regulatory frameworks. Indonesian regulators (KPI, KOMINFO) have implemented graduated response systems emphasizing education alongside enforcement, while Malaysian authorities (MCMC) balance cultural sensitivity requirements with innovation support. Singapore’s more direct approach through POFMA demonstrates alternative regulatory philosophies. Research on information pandemic dynamics (Surjatmodjo et al., 2024) reveals that 78% of studies document concerns about freedom of expression, highlighting the critical need for regulatory frameworks that protect authentic cultural expression while combating genuine disinformation. Effective governance requires adaptive policy frameworks balancing innovation with cultural preservation and community autonomy.

The tension between combating disinformation and preserving cultural authenticity represents the most significant theoretical controversy in current scholarship. While AI systems show remarkable effectiveness at identifying and countering certain forms of false information (Vosoughi et al., 2018), they may simultaneously suppress authentic cultural expressions that appear anomalous to algorithms trained on dominant cultural patterns. This challenge becomes particularly acute when examining how religious communication practices intersect with AI-mediated platform governance, as revealed by research on platforms as custodians of the internet (Gillespie, 2018).

Table 1 illustrates the varying levels of cultural integration achieved through different components of the Cultural-AI Communication Framework, with corresponding key studies that inform our understanding of each component. The analysis reveals that while technical approaches to content authenticity and disinformation detection achieve moderate cultural integration, areas requiring deeper cultural understanding, such as religious identity recognition and community feedback mechanisms, demonstrate higher integration potential when properly implemented, as evidenced by research on hijabi entrepreneurs (Putri and Sonni, 2025) and female entrepreneurship more broadly (Theresia et al., 2025).

Table 1. Cultural-AI communication framework components.

5 Emerging trends and future directions

Recent developments in Southeast Asian digital communication reveal several emerging trends that will likely shape the future intersection of AI technologies, cultural identity, and disinformation management. The increasing sophistication of culturally adapted AI systems represents perhaps the most significant development, with platforms beginning to implement region-specific algorithms that account for local communication patterns, religious practices, and cultural values.

The emergence of “hybrid masculinity” expressions in digital platforms, as documented in recent Indonesian media research (Sonni et al., 2025), demonstrates how traditional cultural categories are being reimagined through AI-mediated communication. This phenomenon extends beyond gender identity to encompass broader questions of how cultural authenticity can be maintained and expressed through increasingly algorithm-driven communication environments. The success of Indonesian hijabi entrepreneurs in leveraging AI for authentic religious identity expression (Putri and Sonni, 2025) suggests that future AI systems may need to move beyond one-size-fits-all approaches toward more nuanced, community-specific implementations.

Technological developments in natural language processing and computer vision are creating new possibilities for culturally sensitive AI implementations. Advanced models capable of understanding religious terminology, cultural metaphors, and community-specific communication patterns offer promise for developing AI systems that enhance rather than suppress cultural authenticity. However, these same technologies raise new concerns about cultural appropriation and the potential for AI systems to misrepresent or commodify sacred or culturally significant communication practices.

The evolution of disinformation tactics, as analyzed in studies of economic conspiracy theories (Sonni, 2025), reveals increasingly sophisticated approaches to exploiting cultural and religious identities for misinformation campaigns. These developments suggest that future AI-enhanced communication systems must simultaneously become more culturally sensitive while developing enhanced capabilities for detecting culturally targeted disinformation that exploits authentic community concerns and values.

Figure 2 illustrates the evolution of research priorities in cultural-AI communication studies, demonstrating a clear progression from basic technical solutions to more sophisticated, community-centered approaches informed by key studies in the field. The temporal analysis reveals increasing recognition that effective AI implementation in culturally diverse contexts requires moving beyond purely technical solutions to embrace holistic frameworks that integrate cultural values, community control, and ethical considerations, as evidenced by the progression from foundational work on algorithmic bias (Noble, 2018) to contemporary research on culturally integrated entrepreneurship systems (Putri and Sonni, 2025).

Figure 2. Evolution of cultural-AI communication research themes (2019–2024). The timeline traces research focus areas across the period (algorithmic fairness, religious identity, platform adaptations, and integrated systems), grouped in the legend as technical foundation, cultural integration, holistic frameworks, and convergence models, with key studies and trend indicators marking research growth, notably a 300% increase in cultural sensitivity research.

The increasing importance of community-controlled algorithms represents another significant trend with implications for combating disinformation while preserving cultural authenticity. Recent experiments in algorithmic sovereignty, where communities maintain greater control over how AI systems represent and moderate their cultural expression, show promise for addressing the tension between disinformation prevention and cultural suppression. These approaches align with broader movements toward decolonizing technology and ensuring that AI development serves diverse global communities rather than reinforcing dominant cultural hegemonies.

Regulatory developments across Southeast Asia indicate a growing recognition by governments of the need for culturally sensitive approaches to AI governance and digital communication regulation. The implications of information pandemic research (Surjatmodjo et al., 2024) suggest that state resilience increasingly depends on developing communication governance frameworks that account for cultural diversity and community autonomy rather than imposing top-down technical solutions that may inadvertently marginalize minority voices.

The integration of Islamic entrepreneurship frameworks with contemporary digital marketing practices, as demonstrated in research on hijabi entrepreneurs (Putri and Sonni, 2025), reveals potential pathways for developing culturally informed business communication strategies that leverage rather than suppress religious identity. This trend suggests broader possibilities for integrating diverse cultural and spiritual traditions with AI-enhanced communication technologies in ways that enhance rather than diminish authentic cultural expression.

6 Potential future developments

The trajectory of current research and technological development suggests several potential future directions for the intersection of AI-enhanced communication, cultural identity, and disinformation management in Southeast Asian contexts. The emergence of more sophisticated cultural AI models capable of understanding nuanced religious and ethnic communication patterns may enable new forms of authentic digital expression while simultaneously improving disinformation detection capabilities.

Building on findings from economic conspiracy theory research (Sonni, 2025), future research will likely focus on developing predictive models for cultural communication that can anticipate how different AI implementations will impact various communities before deployment. This proactive approach could help prevent the cultural marginalization that often accompanies new technology rollouts while ensuring that disinformation countermeasures do not inadvertently suppress legitimate cultural expression.

The success of platform-specific cultural adaptations documented in recent media studies (Sonni et al., 2025) suggests potential for developing AI systems that can automatically adjust their cultural sensitivity parameters based on platform context and community characteristics. Such adaptive systems could maintain consistent respect for cultural values while optimizing for the unique affordances and user expectations of different digital environments.

The integration of blockchain technologies with AI-enhanced communication systems presents intriguing possibilities for creating tamper-proof cultural communication records while enabling community verification of authentic cultural expression. Such systems could help distinguish between genuine cultural communication and culturally themed disinformation campaigns, addressing one of the most significant challenges in current content moderation approaches.

The development of culturally informed, generative AI systems represents another frontier with significant implications for authentic cultural expression and the management of disinformation. These systems could help communities create culturally appropriate content while assisting in identifying artificially generated disinformation that mimics cultural communication patterns, building on the sophisticated understanding of cultural identity construction demonstrated in entrepreneurship research (Putri and Sonni, 2025).

Future research on female entrepreneurship and cultural identity (Theresia et al., 2025) may inform the development of AI systems specifically designed to support minority entrepreneurs in navigating complex identity negotiations while building successful businesses. Such systems could provide culturally sensitive guidance for personal branding, market positioning, and community engagement that respects rather than compromises authentic cultural and religious expression.

7 Discussion

The synthesis of current research reveals that the intersection of AI-enhanced communication, cultural identity, and disinformation management represents a critical frontier for both theoretical development and practical implementation in Southeast Asian digital ecosystems. The evidence suggests that culturally sensitive AI implementations can simultaneously enhance authentic cultural expression and improve disinformation detection, challenging zero-sum assumptions about technology and cultural preservation.

The success of Indonesian hijabi entrepreneurs in leveraging AI for authentic expression of their religious identity, while achieving superior business outcomes (Putri and Sonni, 2025), demonstrates the practical viability of cultural-AI convergence approaches. These findings suggest that future AI development should prioritize cultural sensitivity not merely as an ethical consideration but as a pathway to improved system effectiveness and user engagement. The documented 34% increase in funding success rates and 58% improvement in market reach among entrepreneurs using culturally sensitive AI systems provide compelling evidence for the commercial viability of inclusive technology design.

The emergence of platform-specific cultural adaptations, as documented in recent studies of Indonesian reality shows and digital expressions of masculinity (Sonni et al., 2025), suggests that different technological environments necessitate distinct approaches to cultural integration. This finding has significant implications for developing AI systems that can adapt to various platform affordances while maintaining consistent respect for cultural values and authentic expression. The contrast between TikTok’s 45% creative expression content and television’s 65% traditional representations illustrates how platform characteristics actively shape, rather than merely distribute, cultural content.

The documented effectiveness of community-centered AI design approaches in Southeast Asian contexts provides valuable insights for global AI development. The evidence suggests that meaningful community consultation during algorithm development phases can prevent discriminatory outcomes while enhancing system effectiveness, challenging purely technical approaches to AI fairness and bias mitigation. Research on information pandemic dynamics (Surjatmodjo et al., 2024) emphasizes that effective disinformation countermeasures must account for local communication patterns and community trust networks rather than applying universal technical solutions.

However, significant challenges remain in balancing the rapid pace of technological development with the careful cultural consultation required for authentic AI integration. The tension between scalable technical solutions and culturally specific implementations requires ongoing research and innovative approaches to algorithm development that can accommodate both efficiency and cultural sensitivity. The complexity of this challenge is illustrated by research on economic conspiracy theories (Sonni, 2025), which reveals how sophisticated disinformation campaigns can exploit authentic cultural concerns to spread false information, requiring AI systems that can distinguish between legitimate cultural expression and manipulative disinformation tactics.

The implications extend beyond technical considerations to encompass broader questions about digital sovereignty, cultural preservation, and the future of authentic human expression in increasingly AI-mediated communication environments. The Southeast Asian experience provides valuable lessons for other regions grappling with similar challenges of maintaining cultural authenticity while leveraging AI technologies for communication enhancement and disinformation management.

Current research gaps in longitudinal studies and cross-platform analysis limit our understanding of how culturally sensitive AI implementations evolve over time and their long-term impacts on community communication practices. Future research should prioritize these methodological challenges while developing more sophisticated frameworks for evaluating cultural AI effectiveness beyond traditional technical metrics.

The evidence supports the development of regulatory frameworks that acknowledge the complex relationship between AI technology, cultural expression, and disinformation management, suggesting that effective governance requires nuanced approaches that move beyond binary technical solutions toward a more sophisticated understanding of cultural-technological convergence. The work on state resilience and information pandemic management (Surjatmodjo et al., 2024) suggests that regulatory approaches must integrate cultural sensitivity with technical effectiveness to build sustainable digital communication governance systems.

8 Conclusion

This mini review reveals that the intersection of digital disinformation, AI-enhanced communication, and cultural identity construction in Southeast Asian startup ecosystems represents a critical area of scholarly and practical importance that demands continued theoretical development and empirical investigation. The synthesis of current literature demonstrates that culturally sensitive AI implementations can enhance rather than suppress authentic cultural expression while simultaneously improving disinformation detection capabilities, challenging conventional assumptions about the relationship between technological advancement and cultural preservation.

The emergence of successful cultural-AI convergence models, particularly in Indonesian entrepreneurial contexts (Putri and Sonni, 2025), provides empirical evidence that AI technologies can be designed and implemented to respect and amplify cultural diversity rather than homogenize communication practices toward dominant cultural norms. The documented success of hijabi entrepreneurs in leveraging culturally sensitive AI systems for authentic expression of their religious identity, while achieving superior business outcomes, suggests that cultural sensitivity in AI development represents not merely an ethical imperative but a pathway to improved system effectiveness and user satisfaction.

The development of the Cultural-AI Communication Convergence framework contributes to ongoing theoretical debates about algorithmic fairness, digital inclusion, and the future of authentic human expression in AI-mediated environments. This framework, informed by diverse research ranging from platform-specific cultural adaptations (Sonni et al., 2025) to disinformation dynamics (Sonni, 2025) and entrepreneurship success patterns (Theresia et al., 2025), emphasizes the importance of community-centered design approaches, continuous cultural feedback mechanisms, and algorithmic transparency in creating AI systems that serve diverse global communities while effectively combating disinformation.

Future research should prioritize longitudinal studies examining the evolution of culturally sensitive AI implementations over time, cross-platform analysis of cultural identity coordination strategies, and the development of predictive models to assess the impact of cultural AI before deployment. The methodological challenges identified in current scholarship require innovative approaches that integrate computational analysis with ethnographic methods to capture subtle cultural nuances that determine AI system effectiveness in diverse cultural contexts.

The regulatory implications, informed by research on information pandemic management and state resilience (Surjatmodjo et al., 2024), suggest the need for adaptive policy frameworks that balance technological innovation with cultural preservation and community autonomy. Southeast Asian governmental initiatives toward culturally sensitive AI governance provide valuable models for other regions seeking to develop inclusive technology policies that respect diverse cultural values while promoting digital economic development.

The evidence from entrepreneurship research (Theresia et al., 2025; Putri and Sonni, 2025) demonstrates that successful integration of cultural identity with digital business practices requires sophisticated understanding of both technological capabilities and cultural values. This finding has implications beyond individual entrepreneur success to encompass broader questions about inclusive economic development and the role of cultural diversity in fostering innovation and competitive advantage in global digital markets.

Ultimately, this review demonstrates that the future of AI-enhanced communication in culturally diverse societies depends on moving beyond purely technical solutions toward holistic frameworks that integrate cultural values, community control, and ethical considerations into the fundamental design and implementation of artificial intelligence systems. The Southeast Asian experience, as documented across multiple research streams from disinformation studies (Sonni, 2025) to platform analysis (Sonni et al., 2025) and entrepreneurship research (Putri and Sonni, 2025), provides crucial insights for developing AI technologies that enhance rather than diminish human cultural diversity while effectively addressing the critical challenge of digital disinformation in our increasingly connected world.

Author contributions

AS: Resources, Formal analysis, Visualization, Writing – original draft, Methodology, Software, Data curation, Investigation, Validation, Writing – review & editing, Conceptualization.

Funding

The author(s) declared that financial support was not received for this work and/or its publication.

Conflict of interest

The author(s) declared that this work was conducted in the absence of any commercial or financial relationships that could be construed as a potential conflict of interest.

Generative AI statement

The author(s) declared that Generative AI was not used in the creation of this manuscript.

Any alternative text (alt text) provided alongside figures in this article has been generated by Frontiers with the support of artificial intelligence and reasonable efforts have been made to ensure accuracy, including review by the authors wherever possible. If you identify any issues, please contact us.

Publisher’s note

All claims expressed in this article are solely those of the authors and do not necessarily represent those of their affiliated organizations, or those of the publisher, the editors and the reviewers. Any product that may be evaluated in this article, or claim that may be made by its manufacturer, is not guaranteed or endorsed by the publisher.

References

Ahmad, N. H., Ramayah, T., Wilson, C., and Kummerow, L. (2010). Is entrepreneurial competency and business success relationship contingent upon business environment? Int. J. Entrep. Behav. Res. 16, 182–203. doi: 10.1108/13552551011042780

Ananny, M., and Crawford, K. (2016). Seeing without knowing: limitations of the transparency ideal and its application to algorithmic accountability. New Media Soc. 20, 973–989. doi: 10.1177/1461444816676645

Benjamin, R. (2020). Race after technology: abolitionist tools for the New Jim Code. Cambridge, UK: Polity.

Bradshaw, S., and Howard, P. N. (2019). The global disinformation order: 2019 global inventory of organised social media manipulation. Oxford: Oxford Internet Institute. Available online at: https://demtech.oii.ox.ac.uk/wp-content/uploads/sites/12/2019/09/CyberTroop-Report19.pdf (Accessed June 23, 2025).

Buolamwini, J., and Gebru, T. (2018). Gender shades: intersectional accuracy disparities in commercial gender classification. Proceedings of the 1st Conference on Fairness, Accountability and Transparency, Proceedings of Machine Learning Research. Available online at: https://proceedings.mlr.press/v81/buolamwini18a.html (Accessed June 23, 2025).

Gillespie, T. (2018). Custodians of the internet: platforms, content moderation, and the hidden decisions that shape social media. New Haven: Yale University Press.

Hao, K. (2019). This is how AI bias really happens—and why it’s so hard to fix. MIT Technology Review. Available online at: https://www.technologyreview.com/2019/02/04/137602/this-is-how-ai-bias-really-happensand-why-its-so-hard-to-fix/ (Accessed June 12, 2025).

Katzenbach, C., and Ulbricht, L. (2019). Algorithmic governance. Internet Policy Rev. 8, 1–18. doi: 10.14763/2019.4.1424

Kshetri, N. (2017). The evolution of the internet of things industry and market in China: an interplay of institutions, demands and supply. Telecommun. Policy 41, 49–67. doi: 10.1016/j.telpol.2016.11.002

Miller, D., Costa, E., Haynes, N., McDonald, T., Nicolescu, R., Sinanan, J., et al. (2016). How the world changed social media, Vol. 1, 1st Edn. London: UCL Press.

Noble, S. U. (2018). Algorithms of oppression: how search engines reinforce racism. New York, NY: NYU Press.

Putri, V. C. C., and Sonni, A. F. (2025). AI-driven personal branding for female entrepreneurs: the Indonesian hijabi startup ecosystem. Journalism Media 6:131. doi: 10.3390/journalmedia6030131

Ramadani, V., Dana, L. P., Ratten, V., and Tahiri, S. (2015). The context of Islamic entrepreneurship and business: concept, principles and perspectives. Int. J. Bus. Globalisation 15:244. doi: 10.1504/ijbg.2015.071906

Sonni, A. F. (2025). Digital disinformation and financial decision-making: understanding the spread of economic conspiracy theories in Indonesia. Front. Hum. Dyn. 7, 1–13. doi: 10.3389/fhumd.2025.1617919

Sonni, A. F., Putri, V. C. C., Akbar, M., and Irwanto, I. (2025). Platform-specific masculinities: the evolution of gender representation in Indonesian reality shows across television and digital media. Journalism Media 6, 1–17. doi: 10.3390/journalmedia6010038

Surjatmodjo, D., Unde, A. A., Cangara, H., and Sonni, A. F. (2024). Information pandemic: a critical review of disinformation spread on social media and its implications for state resilience. Soc. Sci. 13:418. doi: 10.3390/socsci13080418

Theresia, S., Sihombing, S. O., and Antonio, F. (2025). From effectuation to empowerment: unveiling the impact of women entrepreneurs on small and medium enterprises’ performance—evidence from Indonesia. Administrative Sci. 15, 1–28. doi: 10.3390/admsci15060198

van Dijck, J. (2024). The platform society: public values in a connective world. Oxford: Oxford University Press.

Vosoughi, S., Roy, D., and Aral, S. (2018). The spread of true and false news online. Science 359, 1146–1151. doi: 10.1126/science.aap9559

Zuboff, S. (2019). The age of surveillance capitalism: the fight for a human future at the new frontier of power. New York, NY: PublicAffairs.

Keywords: artificial intelligence, cultural identity, digital disinformation, Islamic communication, Southeast Asia, startup ecosystems

Citation: Sonni AF (2026) A critical review of AI-enhanced communication strategies in Southeast Asian startup ecosystems. Front. Hum. Dyn. 7:1697293. doi: 10.3389/fhumd.2025.1697293

Received: 03 September 2025; Revised: 15 December 2025; Accepted: 30 December 2025;
Published: 13 January 2026.

Edited by:

Daisuke Akiba, The City University of New York, United States

Reviewed by:

Amr Assad, Higher Colleges of Technology, United Arab Emirates
Ajeng Illastria Rosalina, Indonesian Food and Drug Authority, Indonesia

Copyright © 2026 Sonni. This is an open-access article distributed under the terms of the Creative Commons Attribution License (CC BY). The use, distribution or reproduction in other forums is permitted, provided the original author(s) and the copyright owner(s) are credited and that the original publication in this journal is cited, in accordance with accepted academic practice. No use, distribution or reproduction is permitted which does not comply with these terms.

*Correspondence: Alem Febri Sonni, alemfebris@unhas.ac.id
