Artificial intelligence, and in particular large language models (LLMs), is reshaping the field of psychotherapy by offering new opportunities for delivering mental health support and therapeutic interventions. The expanding presence of AI within clinical environments raises crucial questions concerning the safety, efficacy, transparency, and societal implications of these technologies. Recent studies have demonstrated the potential of LLMs to broaden access to care and to personalize treatment, yet debates continue over their clinical reliability, the preservation of human values, and the equitable distribution of benefits.
While the technological capabilities of LLMs have advanced rapidly, evidence on the long-term effectiveness, risks, and societal consequences of these systems in mental health care remains fragmented and incomplete. Moreover, issues of ethical oversight, policy harmonization, and public trust are at the forefront of academic and regulatory discussions, highlighting a pressing need for multidisciplinary investigation and robust frameworks to steer AI integration in psychotherapy. Beyond technical innovation, it is essential to build a clear understanding of both the real-world and potential impacts of these technologies on clinical practice, patient outcomes, and society.
This Research Topic aims to critically explore and shape the evolving landscape of AI in psychotherapy through a multidisciplinary lens that encompasses psychology, ethics, health economics, and governance. Central objectives include the development of rigorous evaluation criteria for AI-based therapeutic tools, the articulation of policy and ethical standards to safeguard patient well-being, and the formulation of governance mechanisms that align innovation with public health needs. The goal is to foster dialogue and knowledge exchange on the actual and potential impacts of AI in psychotherapy, in order to support the responsible and equitable deployment of AI systems that uphold safety, effectiveness, and fundamental human rights.
To gather further insights into the multidimensional challenges and opportunities surrounding AI in psychotherapy, we invite contributions that address, but are not limited to, the following themes:
• defining clinical and economic evaluation standards for AI-assisted psychotherapy tools
• ethical frameworks and transparency in AI-driven mental health interventions
• policy development, legal considerations, and international governance for AI use in healthcare
• cross-disciplinary strategies for patient safety and clinician support
• societal impacts, equity, and inclusion in AI-enabled mental health services
• stakeholder engagement and public trust in AI-facilitated therapy
• implications of AI-assisted interactions for therapeutic alliance and clinician roles.
We welcome original research articles, reviews, policy analyses, perspectives, and case studies as part of this Research Topic.
Article types and fees
Articles that are accepted for publication by our external editors following rigorous peer review incur a publishing fee charged to authors, institutions, or funders.
This Research Topic accepts the following article types, unless otherwise specified in the Research Topic description:
Brief Research Report
Conceptual Analysis
Data Report
Editorial
FAIR² Data
FAIR² DATA Direct Submission
General Commentary
Hypothesis and Theory
Methods
Mini Review
Opinion
Original Research
Perspective
Registered Report
Review
Systematic Review
Technology and Code
Keywords: AI psychotherapy, LLMs, Ethics, Clinical evaluation, Governance, Equity and access
Important note: All contributions to this Research Topic must be within the scope of the section and journal to which they are submitted, as defined in their mission statements. Frontiers reserves the right to guide an out-of-scope manuscript to a more suitable section or journal at any stage of peer review.