The rapid emergence of Generative Artificial Intelligence (GenAI) and Large Language Models (LLMs) is catalyzing a paradigm shift in mental health research and service delivery. While traditional AI approaches in psychiatry have primarily focused on discriminative tasks such as diagnostic classification, GenAI introduces new capabilities for scalable human-like interaction, personalized content generation, and the synthesis of complex multimodal data (e.g., language, voice, facial expression, and movement). These advances may create opportunities to strengthen mental health promotion, prevention, and care provision at the population level. Yet the field faces two key challenges: an epistemological gap in understanding how GenAI reshapes mental health theory and constructs (e.g., digital phenotypes and risk/protective factors), and an implementation gap in translating algorithms into robust, clinically and publicly validated, equitable, and governable tools for real-world settings.
This Research Topic aims to bridge these gaps by exploring the synergy between “Sensing” (capturing multimodal behavioral indicators relevant to mental health states and trajectories) and “Synthesis” (generating interactive clinical dialogues, simulations, and decision-support tools). By emphasizing both theoretical inquiry and applied public health relevance, we seek to move beyond model performance metrics toward frameworks that support trustworthy, interpretable, and scalable digital mental health approaches, including their implications for health systems, policy, and equity.
The primary objective of this Research Topic is to catalyze interdisciplinary work integrating AI engineering, psychiatric and behavioral science, and implementation/health services research. We aim to identify robust frameworks for population-level screening, monitoring, early identification, prevention-oriented interventions, and service delivery support, with a focus on external validation, representative samples where feasible, and real-world applicability. Key questions include: How does GenAI reshape our theoretical understanding of patient-clinician interactions? What are the best practices for developing AI-driven products that are both clinically effective and ethically sound? Which frameworks ensure that these tools are not just "black boxes" but interpretable, actionable components of the clinical workflow?
We welcome original research, perspectives (theoretical and methodological), systematic reviews (PRISMA-aligned where appropriate), and case studies focused on, but not limited to, the following themes:
• Theoretical & Epistemological Perspectives: Philosophical and metascientific discussions on how GenAI changes the biopsychosocial model, including the validity of "robot psychologists" and AI-mediated therapeutic alliances.
• GenAI-Enabled Multimodal Biomarkers: Advanced modeling of non-verbal behaviors, including micro-expressions, vocal features, and psychomotor markers, and their integration into generative diagnostic systems.
• Interactive Clinical Products & Simulation: Development and validation of AI-driven products such as "Standardized Simulated Patients" for medical training, pre-consultation assistants (triage bots), and AI-augmented CBT/ACT tools.
• Implementation Science & Clinical Workflow: Studies on the integration of GenAI products into hospital information systems (HIS), focusing on usability, "human-in-the-loop" protocols, and the democratization of psychological resources.
• Methodological Innovations with Public Health Relevance: Utilizing LLMs for qualitative data coding, synthetic data generation for rare disorders, and AI-powered psychometric tool optimization.
• Evaluation, Ethics & Governance: Benchmarking frameworks for "hallucination" risks in clinical contexts, bias mitigation in multimodal sensing, and the development of safety standards for unsupervised AI deployment.
Article types and fees
Articles that are accepted for publication by our external editors following rigorous peer review incur a publishing fee charged to authors, institutions, or funders.
This Research Topic accepts the following article types, unless otherwise specified in the Research Topic description:
Brief Research Report
Clinical Trial
Community Case Study
Conceptual Analysis
Curriculum, Instruction, and Pedagogy
Data Report
Editorial
FAIR² Data
FAIR² DATA Direct Submission
General Commentary
Hypothesis and Theory
Methods
Mini Review
Opinion
Original Research
Perspective
Policy and Practice Reviews
Policy Brief
Review
Study Protocol
Systematic Review
Keywords: Generative AI, Large Language Models (LLMs), Digital Psychiatry, Computational Psychology, Psychotherapy Chatbots, Synthetic Data, Clinical Decision Support
Important note: All contributions to this Research Topic must be within the scope of the section and journal to which they are submitted, as defined in their mission statements. Frontiers reserves the right to guide an out-of-scope manuscript to a more suitable section or journal at any stage of peer review.