- 1 School of Social Work, Marian College Kuttikkanam Autonomous, Kuttikkanam, India
- 2 School of Social Work, Tata Institute of Social Sciences, Guwahati, India
The rapid expansion of Artificial Intelligence (AI) and digital platforms in mental health care has introduced promising tools for screening, triage, and psychoeducation. Yet, for individuals with intellectual disabilities (ID), a heterogeneous group encompassing varying levels of cognitive functioning, communication abilities, and support needs, this technological shift has intensified rather than ameliorated pre-existing inequities. Intellectual disability encompasses mild to profound impairments in cognitive processing, often intersecting with limitations in adaptive behavior, verbal communication, and decision-making autonomy (1). These variations shape how individuals interact with and benefit from digital mental health systems. Despite growing discourse around inclusive design, the near-total absence of individuals with ID from digital mental health research and system architecture remains a paradigmatic failure (2). This exclusion is not a function of technological incapacity but stems from entrenched epistemological and clinical assumptions that erase cognitive variance as a legitimate form of mental health subjectivity. Addressing this omission demands a foundational reimagining of how digital mental health systems conceptualize intelligence, usability, and therapeutic engagement.
The prevailing dominance of the psychiatric-medical model in digital mental health development reinforces a reductive, pathologizing approach to intellectual disability (3, 4). Rooted in the notion of individual deficiency, this model underpins many algorithmic frameworks that normalize neurotypical patterns of cognition and emotional regulation. Consequently, digital interventions—from AI chatbots and self-screening tools to emotion-sensing technologies—often rely on decision trees and behavioral templates that render individuals with ID algorithmically invisible (5–7). These systems lack responsiveness to cognitive and communicative differences and often misinterpret divergent behaviors as dysfunction. While some efforts in digital mental health attempt to integrate social determinants and participatory frameworks, these remain peripheral to dominant logics that continue to prioritize normative constructs of cognition. This exclusion is embedded not only in technical models but in design infrastructures that assume linear logic, rational agency, and verbal fluency—traits aligned with neurotypical cognition. Technologies driven by natural language processing, adaptive learning, and emotion recognition frequently misconstrue non-linear cognition, atypical affect, and symbolic expression as errors. These systems reflect embedded norms of legibility and intelligibility, delegitimizing alternative ways of knowing, expressing, and relating. This aligns with broader critiques of epistemic oppression in disability scholarship, which identify academic gatekeeping and communicative inaccessibility as mechanisms of exclusion (8).
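To make this mechanism concrete, the following minimal sketch shows how a rule-based triage flow of the kind critiqued here can encode neurotypical assumptions. Every name, keyword, and threshold is hypothetical and stands in for no real product; the sketch illustrates the failure mode, not any specific system's implementation.

```python
# Hypothetical sketch: a rule-based triage flow of the kind critiqued above.
# All function names, keywords, and thresholds are illustrative assumptions.

def triage(response: str) -> str:
    """Classify a free-text check-in; assumes fluent, literal, first-person prose."""
    text = response.lower().strip()
    if not text:
        return "no_data"          # silence or non-verbal input is simply dropped
    if "hopeless" in text or "can't go on" in text:
        return "escalate"         # keyword matching presumes conventional phrasing
    if len(text.split()) < 4:
        return "invalid_input"    # short, symbolic, or echoed replies are treated as noise
    return "self_help_module"     # default path assumes autonomous navigation

# A user who answers with a picture symbol, a single word, or scripted speech
# never reaches the escalation branch: algorithmic invisibility in three rules.
```

Each branch treats divergence from fluent prose as missing or invalid data rather than as a different mode of expression, which is precisely how such templates render users with ID invisible to downstream care pathways.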
AI’s potential to enhance communication, education, and independence for people with ID is often acknowledged—particularly through personalized support, early diagnosis, and adaptive environments. However, these narratives are frequently framed within skill-based discourses that treat disability as a variable for technical adaptation rather than a standpoint for epistemological and design reconfiguration. Algorithmic fairness frameworks dominate equity discourses, yet intellectual disability is rarely included in fairness metrics or audit protocols, risking systems that reproduce structural exclusion beneath a veneer of inclusivity (9, 10). One critical vector of exclusion lies in the construction of training datasets. Mental health datasets often originate from normatively defined populations, systematically excluding individuals with ID. This leads to algorithmic misclassification or total omission. In high-stakes domains such as suicide risk prediction or digital phenotyping, such exclusion results in erroneous assessments or denial of services. These gaps reflect a self-reinforcing feedback loop: exclusion from data produces exclusion from care.
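The audit gap described above can be illustrated with a small sketch: a per-subgroup recall check in which intellectual disability is, hypothetically, recorded as an audit dimension. The records, labels, and the id_status flag are invented for illustration only; the point is that a disparity is visible only where the metric is actually disaggregated.

```python
# Illustrative sketch of a subgroup audit extended to intellectual disability.
# The data and the "id_status" flag are hypothetical; a metric that is never
# computed for a group cannot reveal that group's exclusion.
from collections import defaultdict

def subgroup_recall(records, group_key):
    """Recall (sensitivity) per subgroup for a binary risk classifier."""
    hits, positives = defaultdict(int), defaultdict(int)
    for r in records:
        if r["true_risk"]:
            positives[r[group_key]] += 1
            hits[r[group_key]] += r["predicted_risk"]
    return {g: hits[g] / positives[g] for g in positives}

records = [
    {"true_risk": 1, "predicted_risk": 1, "id_status": "no_ID"},
    {"true_risk": 1, "predicted_risk": 1, "id_status": "no_ID"},
    {"true_risk": 1, "predicted_risk": 0, "id_status": "ID"},   # missed case
    {"true_risk": 1, "predicted_risk": 0, "id_status": "ID"},   # missed case
]

print(subgroup_recall(records, "id_status"))
# {'no_ID': 1.0, 'ID': 0.0} — a disparity invisible to any audit
# that never disaggregates by intellectual disability.
```

In the high-stakes domains named above, such as suicide risk prediction, an aggregate recall of 0.5 here would mask a subgroup recall of zero, which is the feedback loop this paragraph describes: exclusion from data produces exclusion from care.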
Standard research methodologies in digital mental health often preclude inclusion by relying on tools for consent, symptom tracking, and user feedback that presume verbal fluency and linear cognition, thereby excluding individuals who communicate symbolically, non-verbally, or with cognitive divergence. Quantitative methods privilege standardization over accessibility, while qualitative methods are rarely adapted for co-production with individuals with ID. Ethical review boards frequently label these individuals as “inherently vulnerable,” enacting protective exclusions that erase agency and perpetuate epistemic injustice. The medical model’s entrenchment also persists in mainstream AI ethics and policy frameworks. Instruments like the EU AI Act and UNESCO’s Ethical AI Recommendations prioritize transparency and risk management but omit mandates for cognitive diversity or participatory governance (11–13). As such, cognitive justice—defined here as the equitable recognition and integration of diverse ways of knowing—remains largely absent from regulatory landscapes.
Alternative frameworks offer critical correctives. Disability justice emphasizes intersectional leadership and challenges exclusionary knowledge production. Participatory and disability-led design promotes co-creation through accessible modalities. Disability data justice views data infrastructures as sites of struggle, advocating for community-defined priorities. The anti-ableism framework urges developers to recognize disability as a central identity category. DIKWP’s semantic justice model bridges cognitive diversity and legal reasoning, advancing inclusive algorithmic jurisprudence. These frameworks, while promising, remain marginal in practice. Their integration into high-risk AI deployments requires enforceable mandates: cognitive accessibility standards, participatory design protocols, and metrics for epistemic inclusion—defined here as the deliberate integration of diverse cognitive, communicative, and experiential knowledge systems into the full life cycle of digital mental health technologies, from design to governance. Ethics curricula must address cognitive diversity, disability justice, and capability-informed evaluation. Policymaker-academic collaborations with grassroots organizations are essential to institutionalize these approaches.
A comparative analysis of digital mental health platforms highlights the consequences of epistemic exclusion and the emerging potential for inclusive redesign. Many mainstream tools are structured around neurotypical patterns of communication and interaction, which may not align with the cognitive preferences, expressive modes, and interpretive frameworks of users with intellectual disabilities (14–16). However, there is currently little robust empirical evidence to confirm or refute this misalignment, a significant gap in our understanding of how digital mental health systems interact with cognitive diversity. This evidence gap makes it difficult to establish whether digital mental health tools are therapeutically effective, accessible, or even safe for individuals with intellectual disabilities. Broader accessibility concerns also persist, as many mental health apps remain unevaluated for use by disabled populations, despite claims of expanding access (17). These concerns collectively underscore the need for a unified participatory design approach—one that centers people with intellectual disabilities throughout development and evaluation, and incorporates their lived experiences, communicative strategies, and cognitive capacities across age groups and clinical settings (16, 18, 19).
Emotion AI systems misclassify atypical affect; ostensibly accessible games overwhelm users with cognitive processing limitations; symbolic or minimally verbal users are excluded from platforms like Guremintza and Wikibase (20–22). Stress-regulation apps often presume autonomous navigation and structured comprehension. Encouragingly, efforts such as iterative playtesting, pictorial UIs, semantic navigation tools, and iconographic dashboards demonstrate the feasibility of inclusive co-design. Microsoft’s AI for Accessibility initiative and the UK-based AbleChat pilot signal early but promising shifts from symbolic inclusion to structural participation (23, 24).
However, in low- and middle-income countries (LMICs), the stakes are magnified. Imported AI tools built on Western cognitive norms often invalidate local caregiving practices, knowledge systems, and expressions of distress. Without contextual adaptation, such tools operate as instruments of digital colonialism, perpetuating cognitive and cultural hierarchies. For example, in several rural areas across South Asia and Sub-Saharan Africa, digital mental health apps are accessed through shared mobile phones with limited data connectivity, often controlled by caregivers or community health workers. These conditions not only limit individual autonomy but also challenge the feasibility of consistent therapeutic engagement without offline functionality or adaptive, low-bandwidth interfaces. To counter these realities, coordinated and context-sensitive reforms are necessary. Procurement systems must mandate cognitive accessibility audits and integrate local disability organizations in platform evaluations. Funding bodies should require multi-phase co-design protocols, enabling the full participation of individuals with ID through Easy Read materials, pictorial tools, and supported decision-making. Governance bodies must ensure representation through permanent ID-affiliated seats. Implementation strategies must address infrastructural disparities with offline functionality, shared-device access, and community-based digital facilitation. Academic institutions should lead the formation of interdisciplinary hubs focused on disability ethics, inclusive AI design, and participatory research to ensure sustained innovation.
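As one hedged illustration of such infrastructural adaptation, the sketch below queues check-ins locally and flushes them when connectivity returns, so that intermittent networks do not interrupt therapeutic engagement. The file name, record format, and profile-keying are assumed design choices for this sketch, not an existing system's API.

```python
# Minimal sketch of an offline-first interaction queue, under the assumptions
# named above (shared devices, intermittent connectivity). File name and
# record format are hypothetical design choices, not a real product's API.
import json
import os

QUEUE_PATH = "pending_checkins.jsonl"    # local store survives loss of signal

def record_checkin(entry: dict) -> None:
    """Append a check-in locally; no network required."""
    with open(QUEUE_PATH, "a", encoding="utf-8") as f:
        f.write(json.dumps(entry) + "\n")

def sync_when_connected(upload) -> int:
    """Flush queued entries via `upload(entry)` once a connection appears."""
    if not os.path.exists(QUEUE_PATH):
        return 0
    with open(QUEUE_PATH, encoding="utf-8") as f:
        entries = [json.loads(line) for line in f]
    for entry in entries:
        upload(entry)                     # caller supplies the transport
    os.remove(QUEUE_PATH)
    return len(entries)

# Keying entries by a local profile rather than the device itself lets several
# family members share one phone without mixing their records; pictorial input
# (here a mood symbol) avoids presuming verbal fluency.
record_checkin({"profile": "user-1", "mood_symbol": "sun"})
```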
This article calls for interdisciplinary scholars, technologists, clinicians, disability advocates, and policymakers to radically reimagine digital mental health as a site for epistemic reconstruction rather than retrofitted inclusion. It argues that intellectual disability must be elevated from the periphery of accessibility compliance to the center of digital mental health design, governance, and evaluation. While recognizing the limitations inherent in an opinion-based article—including the absence of new empirical data and the reliance on selective case illustrations—this piece aims to lay a conceptual foundation for future inquiry and participatory research agendas.

As an actionable starting point, we propose a “minimum standards” checklist for digital mental health systems: (1) mandatory cognitive accessibility audits during procurement; (2) inclusion of individuals with ID in training datasets; (3) deployment of supported decision-making tools and Easy Read formats; (4) co-design protocols with ID stakeholders; and (5) governance structures that guarantee representation of individuals with intellectual disabilities through permanent advisory roles. This requires moving beyond symbolic gestures and implementing a bold, empirically driven research agenda that challenges entrenched norms, disrupts dominant frameworks, and embeds cognitive justice as a foundational principle across the life cycle of technological innovation. The field must commit to structurally embedding the lived experiences, communication modes, and relational epistemologies of people with intellectual disabilities into the infrastructures, algorithms, and oversight systems that define AI-driven mental health care.

This transformative agenda should be anchored in sustained collaboration across psychiatry, cognitive disability studies, human-computer interaction, AI and machine learning, bioethics, implementation science, and health economics to co-create systems not merely inclusive of, but co-authored by, people with intellectual disabilities. Priority areas for research include the development of representative data infrastructures that capture cognitive variance; the institutionalization of cognitive-justice evaluation metrics that supplement conventional AI benchmarks; the prototyping and trialing of inclusive interfaces designed through multi-site, adaptive methodologies; the reform of governance structures to ensure representation, accountability, and accessibility; and the contextual localization of digital interventions in low- and middle-income countries, where infrastructural and epistemic inequities compound. If executed with rigor, transparency, and disability-led co-production, this research program can dismantle the prevailing logic of normative accommodation and replace it with a paradigm of structural co-creation—yielding digital mental health systems that are not only technically robust and clinically effective, but epistemically inclusive and socially transformative.
Author contributions
AB: Conceptualization, Formal Analysis, Writing – original draft, Writing – review & editing, Data curation, Software. AJ: Conceptualization, Writing – review & editing, Writing – original draft, Data curation.
Funding
The author(s) declare that no financial support was received for the research and/or publication of this article.
Conflict of interest
The authors declare that the research was conducted in the absence of any commercial or financial relationships that could be construed as a potential conflict of interest.
Generative AI statement
The author(s) declare that no Generative AI was used in the creation of this manuscript.
Any alternative text (alt text) provided alongside figures in this article has been generated by Frontiers with the support of artificial intelligence and reasonable efforts have been made to ensure accuracy, including review by the authors wherever possible. If you identify any issues, please contact us.
Publisher’s note
All claims expressed in this article are solely those of the authors and do not necessarily represent those of their affiliated organizations, or those of the publisher, the editors and the reviewers. Any product that may be evaluated in this article, or claim that may be made by its manufacturer, is not guaranteed or endorsed by the publisher.
References
1. Ouellette-Kuntz H, Minnes P, Garcin N, Martin C, Lewis MES, and Holden JJA. Addressing health disparities through promoting equity for individuals with intellectual disability. Can J Public Health. (2005) 96:S8–S22. doi: 10.1007/bf03403699
2. Phuong J, Ordóñez P, Cao J, Moukheiber M, Moukheiber L, Caspi A, et al. Telehealth and digital health innovations: A mixed landscape of access. PloS Digital Health. (2023) 2:e0000401. doi: 10.1371/journal.pdig.0000401
3. D’Alfonso S. AI in mental health. Curr Opin Psychol. (2020) 36:112. doi: 10.1016/j.copsyc.2020.04.005
4. Sun J, Dong Q, Wang S-W, Zheng Y, Liu X, Lu T, et al. Artificial intelligence in psychiatry research, diagnosis, and therapy. Asian J Psychiatry. (2023) 87:103705. doi: 10.1016/j.ajp.2023.103705
5. Devaram S. Empathic chatbot: emotional intelligence for mental health well-being. arXiv (Cornell University). (2020). doi: 10.48550/arXiv.2012.09130
6. Jovanovic M, Jevremović A, and Pejović-Milovančević M. Intelligent interactive technologies for mental health and well-being. In: Studies in computational intelligence. Springer Nature (2021). p. 331. doi: 10.1007/978-3-030-72711-6_18
7. Rizvi N, Smith T, Vidyala T, Bolds M, Strickland H, and Begel A. “‘I hadn’t thought about that’: Creators of human-like AI weigh in on ethics & neurodivergence,” In: Proceedings of the 2025 ACM Conference on Fairness, Accountability, and Transparency, (New York, NY, USA: ACM). (2025), 3385–99. doi: 10.1145/3715275.373221
8. Monteleone R. Complexity as epistemic oppression: writing people with intellectual disabilities back into philosophical conversations. Hypatia. (2023) 38:746. doi: 10.1017/hyp.2023.85
9. Bennett CL and Keyes O. What is the point of fairness? Disability, AI and the complexity of justice. arXiv (Cornell University). (2019). doi: 10.48550/arxiv.1908.01024
10. Newman-Griffis D, Rauchberg JS, Alharbi R, Hickman L, and Hochheiser H. Definition drives design: Disability models and mechanisms of bias in AI technologies. First Monday. (2023). doi: 10.5210/fm.v28i1.12903
11. Maccaro A, Stokes K, Statham L, He L, Williams AL, Pecchia L, et al. Clearing the fog: A scoping literature review on the ethical issues surrounding artificial intelligence-based medical devices. J Personalized Med. (2024) 14:443. doi: 10.3390/jpm14050443
12. Nasir S, Khan RA, and Bai S. Ethical framework for harnessing the power of AI in healthcare and beyond. IEEE Access. (2024) 12:31014. doi: 10.1109/access.2024.3369912
13. Trunk A, Birkel H, and Hartmann E. On the current state of combining human and artificial intelligence for strategic organizational decision making. BuR - Business Res. (2020) 13:875. doi: 10.1007/s40685-020-00133-x
14. Sánchez MM, Casado A, Blanco LS, and García AMF. Chatbot, as educational and inclusive tool for people with intellectual disabilities. Sustainability. (2022) 14:1520. doi: 10.3390/su14031520
15. Sanjeewa R, Iyer R, Apputhurai P, Wickramasinghe N, and Meyer D. Systematic review of empathic conversational agent platform designs and their evaluation in the context of mental health. JMIR Ment Health. (2024) 11:e58974. doi: 10.2196/58974
16. Woodward K, Kanjo E, Brown DJ, McGinnity TM, and Harold GT. In the hands of users with intellectual disabilities: co-designing tangible user interfaces for mental wellbeing. Pers Ubiquitous Computing. (2023) 27:2171. doi: 10.1007/s00779-023-01752-x
17. Bunyi J, Ringland KE, and Schueller SM. Accessibility and digital mental health: Considerations for more accessible and equitable mental health apps. Front Digital Health. (2021) 3:742196. doi: 10.3389/fdgth.2021.742196
18. McDonald K, Gibbons CM, Conroy NE, and Olick RS. Facilitating the inclusion of adults with intellectual disability as direct respondents in research: Strategies for fostering trust, respect, accessibility and engagement. J Appl Res Intellectual Disabil. (2021) 35:170. doi: 10.1111/jar.12936
19. Moreno L, Petrie H, Martínez P, and Alarcón R. Designing user interfaces for content simplification aimed at people with cognitive impairments. Universal Access Inf Soc. (2023) 23:99. doi: 10.1007/s10209-023-00986-z
20. Jett J, Sacchi S, Lee JH, and Clarke RI. A conceptual model for video games and interactive media. J Assoc Inf Sci Technol. (2015) 67:505. doi: 10.1002/asi.23409
21. Mantello P and Ho TM. Why we need to be weary of emotional AI. AI Soc. (2022) 39:1447. doi: 10.1007/s00146-022-01576-y
22. Weber-Guskar E. How to feel about emotionalized artificial intelligence? When robot pets, holograms, and chatbots become affective partners. Ethics Inf Technol. (2021) 23:601. doi: 10.1007/s10676-021-09598-8
23. Clarkson PJ, Keates S, Coleman R, and Lebbon C, eds. Inclusive design: design for the whole population. Guildford, England: Springer (2003). doi: 10.1007/978-1-4471-0001-0
24. Thompson J, Martinez JJ, Sarikaya A, Cutrell E, and Lee B. “Chart reader: accessible visualization experiences designed with screen reader users,” In: Proceedings of the 2023 CHI Conference on Human Factors in Computing Systems, (New York, NY, USA: ACM) (2023), 1–18. doi: 10.1145/3544548.3581186
Keywords: intellectual disability, digital mental health, cognitive justice, algorithmic exclusion, AI ethics, participatory design, disability justice, epistemic inclusion
Citation: Babu A and Joseph AP (2025) Repositioning intellectual disability in the ethics of digital mental health technologies. Front. Psychiatry 16:1691940. doi: 10.3389/fpsyt.2025.1691940
Received: 24 August 2025; Accepted: 01 October 2025;
Published: 21 October 2025.
Edited by:
Kerim M. Munir, Boston Children’s Hospital, United States
Reviewed by:
Medard Kofi Adu, Dalhousie University, Canada
Copyright © 2025 Babu and Joseph. This is an open-access article distributed under the terms of the Creative Commons Attribution License (CC BY). The use, distribution or reproduction in other forums is permitted, provided the original author(s) and the copyright owner(s) are credited and that the original publication in this journal is cited, in accordance with accepted academic practice. No use, distribution or reproduction is permitted which does not comply with these terms.
*Correspondence: Anithamol Babu, anitha.mol.babu@gmail.com; Akhil P. Joseph, akhil.joseph@res.christuniversity.in