ORIGINAL RESEARCH article

Front. Psychiatry

Sec. Computational Psychiatry

Volume 16 - 2025 | doi: 10.3389/fpsyt.2025.1583739

This article is part of the Research Topic: Advancing Psychiatric Care through Computational Models: Diagnosis, Treatment, and Personalization.

Aligning Large Language Models for Cognitive Behavioral Therapy: A Proof-Of-Concept Study

Provisionally accepted
  • 1Yonsei University, Seoul, Republic of Korea
  • 2EverEx, Seoul, Republic of Korea
  • 3College of Medicine, Yonsei University, Seoul, Republic of Korea

The final, formatted version of the article will be published soon.

Recent advancements in Large Language Models (LLMs) have significantly impacted society, particularly through their ability to generate responses in natural language. However, their application to psychotherapy remains limited due to the challenge of aligning LLM behavior with clinically appropriate responses. In this paper, we introduce LLM4CBT, a model designed to provide psychotherapy by adhering to professional therapeutic strategies, specifically within the framework of Cognitive Behavioral Therapy (CBT). Our experimental results on real-world conversation data demonstrate that LLM4CBT aligns closely with the behavior of human expert therapists, exhibiting a higher frequency of desirable therapeutic behaviors than existing LLMs. Additionally, experiments on simulated conversation data show that LLM4CBT can effectively elicit automatic thoughts that patients unconsciously hold. Moreover, for patients who have difficulty engaging with the intervention, LLM4CBT is able to pause and wait until they are ready to participate in the discussion, rather than continuing to press them with questions. These results demonstrate the potential of designing LLM-based CBT therapists by aligning the model with appropriate instructions.
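The abstract describes aligning an LLM with CBT-appropriate behavior through instructions, but this page does not include implementation details. The sketch below is a minimal illustration of instruction-based (system-prompt) alignment, assuming a chat-completion-style API; the prompt wording, model name, and cbt_reply helper are hypothetical and do not reproduce the authors' actual LLM4CBT system.

from openai import OpenAI

# Illustrative sketch only: the article does not disclose LLM4CBT's prompts or model.
client = OpenAI()

CBT_SYSTEM_PROMPT = (
    "You are a therapist following Cognitive Behavioral Therapy (CBT). "
    "Use Socratic questioning to help the patient identify automatic thoughts. "
    "If the patient seems unready or distressed, acknowledge this and wait "
    "instead of pressing with further questions."
)

def cbt_reply(history):
    # history: list of {"role": "user" | "assistant", "content": ...} conversation turns
    response = client.chat.completions.create(
        model="gpt-4o",  # placeholder model name, not the model used in the study
        messages=[{"role": "system", "content": CBT_SYSTEM_PROMPT}] + history,
    )
    return response.choices[0].message.content

print(cbt_reply([{"role": "user", "content": "I always mess everything up."}]))

In this kind of setup, the therapeutic strategy lives entirely in the system instruction, which is one simple way to steer an off-the-shelf LLM toward CBT-consistent behavior without fine-tuning.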

Keywords: cognitive behavioral therapy, large language model, artificial intelligence, prompt, alignment

Received: 26 Feb 2025; Accepted: 30 Jun 2025.

Copyright: © 2025 Kim, Choi, Cho, Sohn and Kim. This is an open-access article distributed under the terms of the Creative Commons Attribution License (CC BY). The use, distribution or reproduction in other forums is permitted, provided the original author(s) or licensor are credited and that the original publication in this journal is cited, in accordance with accepted academic practice. No use, distribution or reproduction is permitted which does not comply with these terms.

* Correspondence:
Jy-yong Sohn, Yonsei University, Seoul, Republic of Korea
Byung-Hoon Kim, EverEx, Seoul, Republic of Korea

Disclaimer: All claims expressed in this article are solely those of the authors and do not necessarily represent those of their affiliated organizations, or those of the publisher, the editors and the reviewers. Any product that may be evaluated in this article or claim that may be made by its manufacturer is not guaranteed or endorsed by the publisher.