ORIGINAL RESEARCH article

Front. Artif. Intell.

Sec. Natural Language Processing

Volume 8 - 2025 | doi: 10.3389/frai.2025.1564828

ExDoRA: Enhancing transferability of large language models for depression detection using free-text explanations

Provisionally accepted
  • Priyadarshana, Liang and Piumarta, Kyoto University of Advanced Science (KUAS), Kyoto, Japan

The final, formatted version of the article will be published soon.

Few-shot prompting significantly improves the performance of large language models (LLMs) across various tasks, including both in-domain and previously unseen natural language tasks, by learning from a small number of in-context examples. However, how these examples enhance transferability and contribute to state-of-the-art (SOTA) performance in downstream tasks remains unclear. To address this, we propose ExDoRA, a novel LLM transferability framework that selects the most relevant examples using synthetic free-text explanations. Our hybrid method ranks LLM-generated explanations, selecting those most semantically similar to the input query while preserving diversity. The top-ranked explanations, together with few-shot examples, are then used to enhance LLM knowledge transfer in multi-party conversational modelling for previously unseen depression detection tasks. Evaluations on the IMHI corpus demonstrate that ExDoRA consistently produces high-quality free-text explanations. Extensive experiments on depression detection tasks, including depressed utterance classification (DUC) and depressed speaker identification (DSI), show that ExDoRA achieves SOTA performance. The results indicate significant improvements of up to 20.59% in recall for DUC and 21.58% in F1 score for DSI when using 5-shot examples with top-ranked explanations on the RSDD and eRisk 18 T2 corpora. These findings underscore ExDoRA's potential as an effective screening tool for digital mental health applications.
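For intuition, the relevance-diversity ranking described in the abstract resembles a maximal marginal relevance (MMR) style selection over explanation embeddings. The sketch below is illustrative only and is not the authors' implementation: the embedding inputs, the trade-off weight `lam`, and the function names are assumptions made for the example.

```python
# Illustrative MMR-style sketch: rank candidate explanations by semantic
# relevance to the query while penalizing redundancy among selected items.
# All names and parameters here are assumptions, not ExDoRA's actual code.
import numpy as np

def cosine(a: np.ndarray, b: np.ndarray) -> float:
    # Cosine similarity with a small epsilon to avoid division by zero.
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b) + 1e-12))

def rank_explanations(query_vec: np.ndarray,
                      expl_vecs: list[np.ndarray],
                      k: int = 5,
                      lam: float = 0.7) -> list[int]:
    """Return indices of k explanations balancing relevance to the query
    against dissimilarity to explanations already selected."""
    selected: list[int] = []
    candidates = list(range(len(expl_vecs)))
    while candidates and len(selected) < k:
        def score(i: int) -> float:
            relevance = cosine(query_vec, expl_vecs[i])
            redundancy = max((cosine(expl_vecs[i], expl_vecs[j]) for j in selected),
                             default=0.0)
            return lam * relevance - (1 - lam) * redundancy
        best = max(candidates, key=score)
        selected.append(best)
        candidates.remove(best)
    return selected
```

Under these assumptions, the top-ranked explanations would then be prepended, together with the few-shot examples, to the prompt used for the downstream depression detection task.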

Keywords: LLM transferability, in-context learning, free-text explanations, prompt engineering, digital mental health, natural language processing

Received: 27 Jan 2025; Accepted: 28 Apr 2025.

Copyright: © 2025 Priyadarshana, Liang and Piumarta. This is an open-access article distributed under the terms of the Creative Commons Attribution License (CC BY). The use, distribution or reproduction in other forums is permitted, provided the original author(s) or licensor are credited and that the original publication in this journal is cited, in accordance with accepted academic practice. No use, distribution or reproduction is permitted which does not comply with these terms.

* Correspondence: Y. H. P. P. Priyadarshana, Kyoto University of Advanced Science (KUAS), Kyoto, Japan

Disclaimer: All claims expressed in this article are solely those of the authors and do not necessarily represent those of their affiliated organizations, or those of the publisher, the editors and the reviewers. Any product that may be evaluated in this article or claim that may be made by its manufacturer is not guaranteed or endorsed by the publisher.