
OPINION article

Front. Psychiatry

Sec. Autism

Volume 16 - 2025 | doi: 10.3389/fpsyt.2025.1611101

Redefining Communication in Mental Healthcare: Generative AI for Neurodivergent Equity and Non-Verbal Autistic Inclusion

Provisionally accepted
  • 1Marian College Kuttikkanam Autonomous, Kuttikkanam, Kerala, India
  • 2Tata Institute of Social Sciences Guwahati Off-Campus, Jalukbari, India

The final, formatted version of the article will be published soon.

This lack of representational justice reveals a deep theoretical and practical gap that cannot be resolved through technological enhancement alone; it requires a paradigmatic shift in how communication, personhood, and therapeutic alliance are conceptualized. Unless research and clinical paradigms are expanded to legitimize and respond to non-verbal modes of expression, non-verbal autistic individuals will remain excluded from both the epistemological foundations and the practical realities of equitable mental healthcare.

Generative artificial intelligence (AI) presents a timely and transformative intervention to reconfigure the communicative foundations and relational dynamics of mental healthcare by addressing the expressive gaps that persistently marginalize non-verbal autistic individuals. Unlike conventional tools that prioritize speech interpretation, generative AI systems, particularly those integrating natural language processing and multimodal affective recognition, can process complex, context-rich inputs such as facial micro-expressions, vocal tones, gaze patterns, repetitive movement, autonomic responses, and environmental cues, thereby generating outputs that translate non-verbal emotional states into coherent, relationally relevant messages (Ballesteros et al., 2024; Solis-Arrazola et al., 2025). This is not about reducing neurodivergent communication to neurotypical language; rather, it is about affirming its validity and rendering it legible within therapeutic frameworks. Current AI-augmented systems, such as Affectiva's emotion analytics, Cognoa's pediatric behavioral health platforms, and increasingly adaptive AAC tools, suggest early but promising directions in this regard (Kulke et al., 2020; O'Leary et al., 2024).
For example, in overstimulating environments such as classrooms or clinics, AI-enabled wearables could detect physiological and behavioral stress markers and generate real-time alerts such as "I feel overwhelmed and need a break," allowing clinicians or educators to respond promptly and empathetically. These tools can de-escalate distress, reduce misinterpretation, and promote communicative trust. While preliminary evidence supports the use of AI in ASD-related contexts, empirical research specifically examining generative AI applications for non-verbal autistic individuals remains limited but emergent, warranting focused scholarly attention and investment.

However, it is crucial to emphasize that generative AI cannot, and should not be expected to, replace human empathy, clinical judgment, or the relational nuance of the therapeutic alliance. It cannot fully interpret subjective experience, guarantee cultural sensitivity without intentional design, or autonomously validate emotional truths. Rather than a panacea or autonomous agent of care, generative AI must be positioned as a relational co-participant: an interpretive scaffold that supports, but never substitutes for, the human work of listening, validating, and building trust. Recognizing its limits is essential to avoiding technological idealism and to ensuring that its integration enhances rather than distorts the therapeutic process.

The adaptive capacity of generative AI offers transformative potential to reconfigure how mental healthcare systems engage with the diverse communicative realities of non-verbal autistic individuals, across developmental stages, intersecting identities, and care contexts, by translating nuanced, embodied expressions into therapeutic insight without collapsing their meaning into neurotypical norms (Iannone & Giansanti, 2023).
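To make the wearable scenario concrete, the following is a purely illustrative sketch of the alert logic described above. Every element here is hypothetical: the signal names, the per-user baseline, the averaging formula, and the 0.25 threshold are invented for illustration and are not drawn from any cited system or validated clinical model. The key design point it encodes is that the output is a candidate message for a human to confirm, never an autonomous clinical verdict.

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class Baseline:
    """Per-user resting values, calibrated individually (hypothetical)."""
    heart_rate: float        # beats per minute at rest
    skin_conductance: float  # microsiemens at rest

def stress_index(baseline: Baseline, heart_rate: float,
                 skin_conductance: float) -> float:
    """Crude stress score: mean relative elevation above the user's baseline."""
    hr_elev = max(0.0, (heart_rate - baseline.heart_rate) / baseline.heart_rate)
    sc_elev = max(0.0, (skin_conductance - baseline.skin_conductance)
                  / baseline.skin_conductance)
    return (hr_elev + sc_elev) / 2

def suggest_alert(index: float, threshold: float = 0.25) -> Optional[str]:
    """Return a candidate first-person message, or None.

    The message is a dialogic prompt offered to a clinician, educator,
    or caregiver for confirmation with the user; it is not a diagnosis.
    """
    if index >= threshold:
        return "I feel overwhelmed and need a break."
    return None

baseline = Baseline(heart_rate=72, skin_conductance=4.0)
calm = suggest_alert(stress_index(baseline, 74, 4.1))       # near baseline
stressed = suggest_alert(stress_index(baseline, 110, 7.0))  # elevated signals
```

In practice, thresholds and baselines would have to be co-calibrated with each user, since resting physiology and stress expression vary widely across individuals; a fixed population-level cutoff would reproduce exactly the homogenization the article warns against.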
From preverbal toddlers requiring early detection of sensory distress and communication delays, to school-age children needing support with emotion identification and social interaction, AI-integrated platforms can offer real-time feedback, visual prompts, and personalized engagement in classrooms and therapeutic environments (Xing, 2024). For adolescents managing developmental shifts, peer dynamics, and emerging self-concepts, generative AI could enable safe spaces for emotional journaling, avatar-based role-play, or voice-assisted storytelling: tools that empower identity exploration while mitigating communicative alienation. For adults, especially those at the intersections of autism, intellectual disability, trauma, and gender nonconformity, AI can provide continuous affective tracking, adaptive stress-response strategies, and relational continuity in both institutional and home-based care (Fu et al., 2025).

Equally important is the capacity of generative AI to attend to the sociopolitical dimensions of communication by embedding intersectional insights into its interpretive models (Hohenstein et al., 2023). Whether shaped by language barriers, racialized diagnostic disparities, class-based access limitations, or culturally situated expressions of affect, AI tools must be designed to engage with these contextual nuances, thereby avoiding the homogenization of neurodivergent expression. When attuned to these factors, generative AI can function as both an interpretive aid and a structural corrector, disrupting entrenched speech-dominant paradigms that constrain therapeutic engagement and marginalize non-verbal communicative forms. Its strength lies in its adaptability: systems can evolve in response to user-specific patterns, culturally embedded meanings, and age-appropriate forms of expression, rendering care more responsive, ethical, and inclusive (Dai et al., 2025).
To frame generative AI merely as an assistive communication tool is to overlook its revolutionary potential as a platform for epistemic justice, relational equity, and clinical transformation across the lifespan.

The development, deployment, and validation of AI tools within mental healthcare must be guided by the lived expertise of neurodivergent communities, supported by interdisciplinary collaboration across clinical, technological, and social domains. Participatory design processes must actively involve autistic individuals, caregivers, AAC users, and clinicians to co-create systems that reflect real-world communicative practices and emotional priorities. Pilot trials must go beyond traditional efficacy metrics to capture subjective measures such as emotional resonance, perceived respect, empowerment, and trust in therapeutic relationships. Research methodologies should include multimodal validation strategies that combine quantitative metrics, such as physiological biomarkers and behavioral indicators, with qualitative ethnographies, narrative interviews, and community-led data interpretation. Crucially, AI systems must be trained to recognize affect not as a decontextualized output but as a relational, ecologically situated signal shaped by personal history, environment, and cultural norms. Furthermore, AI-generated insights must be treated not as definitive clinical verdicts but as provisional, dialogic prompts that invite further interpretation, discussion, and negotiation.

While the ethical potential of generative AI is foregrounded throughout this discourse, its deployment also raises profound risks, including the normalization of surveillance, clinical coercion, algorithmic misjudgment, and the commodification of affective data. These threats are especially acute in under-resourced or overly institutionalized environments, where AI may reinforce systemic hierarchies rather than subvert them.
To ensure equitable integration, a phased, justice-oriented implementation strategy must guide its development. This must include a clear delineation of potential harms, such as data misuse, behavioral over-surveillance, and the uncritical pathologization of AI-interpreted signals. Safeguards, including algorithmic auditing, data minimization, neurodivergent oversight, and real-time interpretive feedback loops, must be implemented to counterbalance these risks and uphold ethical integrity.

In Phase One, participatory co-design with non-verbal autistic individuals, caregivers, and neurodivergent clinicians is essential to produce culturally grounded, developmentally responsive prototypes. Ethical safeguards, such as dynamic, ongoing consent mechanisms, transparent data governance, and community-led oversight boards, must be embedded from inception, particularly across both developed and developing countries.

In Phase Two, these systems should undergo multi-site testing in classrooms, clinics, and community settings using mixed-method designs that assess clinical accuracy, emotional attunement, and relational trust. This research must directly engage with complex and unresolved questions: How can informed consent be meaningfully operationalized when communication is technologically mediated, embodied, and non-verbal? What methodological strategies ensure ecological and cultural validity when interpreting multimodal affective data across linguistically diverse, socioeconomically stratified, and geopolitically varied healthcare systems? What safeguards are needed to prevent algorithmic outputs from reinforcing diagnostic biases, intensifying global health inequities, or flattening culturally specific communicative practices? These questions are especially urgent for non-verbal autistic populations who remain excluded from dominant speech-centered diagnostic frameworks and therapeutic paradigms.
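The data-minimization safeguard named above can also be made concrete. The following sketch, with entirely hypothetical field names and record structure, illustrates the principle that raw affective biosignals should be discarded after interpretation, with only a coarse derived label, a pseudonymous identifier, and a deliberately imprecise timestamp retained; this is one possible instantiation, not a prescription for any particular system.

```python
import hashlib
import time

def minimize_record(user_id: str, raw_signals: dict, label: str) -> dict:
    """Data-minimization sketch: the raw biosignals are received but
    deliberately never stored or returned. Only a coarse derived label,
    a pseudonymous ID, and an hour-level timestamp survive.
    (Field names and granularity are illustrative assumptions.)"""
    pseudonym = hashlib.sha256(user_id.encode()).hexdigest()[:12]
    return {
        "user": pseudonym,                # pseudonymous, not the real ID
        "label": label,                   # e.g. "elevated" or "calm"
        "hour": int(time.time() // 3600), # hour-level, not an exact time
    }

record = minimize_record(
    "alice",
    {"heart_rate": 110, "skin_conductance": 7.0},  # discarded after use
    "elevated",
)
```

A real deployment would pair this with salted or keyed hashing, retention limits, and community-governed audit of what the stored labels are used for; plain truncated SHA-256 is shown here only to keep the sketch self-contained.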
Addressing these challenges requires robust integration of affective computing, human-computer interaction, disability studies, neuroethics, communication theory, implementation science, and lived-experience research across regional, linguistic, and cultural contexts. Developing multi-phase longitudinal studies, AI-assisted therapeutic prototypes grounded in ethnographic insight, and comparative global policy frameworks is critical not only for building an internationally credible foundation for equitable AI deployment, but also for ensuring that non-verbal autistic individuals are no longer treated as invisible subjects within mental healthcare systems. This effort represents a research and justice mandate: one that challenges dominant epistemologies while advancing neurodivergent expression within a culturally responsive and ethically grounded mental healthcare paradigm.

Keywords: generative AI, non-verbal autism, neurodivergent communication, ethical AI integration, mental healthcare equity

Received: 13 Apr 2025; Accepted: 29 Jul 2025.

Copyright: © 2025 Joseph and Babu. This is an open-access article distributed under the terms of the Creative Commons Attribution License (CC BY). The use, distribution or reproduction in other forums is permitted, provided the original author(s) or licensor are credited and that the original publication in this journal is cited, in accordance with accepted academic practice. No use, distribution or reproduction is permitted which does not comply with these terms.

* Correspondence:
Akhil P. Joseph, Marian College Kuttikkanam Autonomous, Kuttikkanam, 685531, Kerala, India
Anithamol Babu, Marian College Kuttikkanam Autonomous, Kuttikkanam, 685531, Kerala, India

Disclaimer: All claims expressed in this article are solely those of the authors and do not necessarily represent those of their affiliated organizations, or those of the publisher, the editors and the reviewers. Any product that may be evaluated in this article or claim that may be made by its manufacturer is not guaranteed or endorsed by the publisher.