
ORIGINAL RESEARCH article

Front. Psychiatry

Sec. Digital Mental Health

Volume 16 - 2025 | doi: 10.3389/fpsyt.2025.1665573

This article is part of the Research Topic: Application of Chatbot Natural Language Processing Models to Psychotherapy and Behavioral Mood Health.

Evaluating Object Detection in Chatbot Interactions for Mental Health Assessment

Provisionally accepted
Weilin Sun*
  • School of Electronic Information Engineering, Taiyuan University of Science and Technology, Taiyuan, China

The final, formatted version of the article will be published soon.

The increasing integration of conversational agents into behavioral and mental health domains necessitates advanced evaluation methodologies tailored to the complexity of these interactions. Conventional metrics, including BLEU and ROUGE, are insufficient for assessing the nuanced and context-dependent nature of mental health dialogues, as they fail to account for persona alignment, therapeutic coherence, and contextual relevance. To overcome these limitations, this study introduces an evaluation framework that combines formal task modeling, interlocutor-aware embedding, and knowledge-grounded reasoning. At the core of this framework lies the Interlocutor-Aware Latent Evaluation Network (IALENet), a dual-encoder architecture designed to capture dialogue quality within a shared latent semantic space that reflects interaction dynamics. IALENet integrates speaker-role embeddings and temporal attention mechanisms to extract evaluative features aligned with conversational fluency and therapeutic consistency. Complementing this architecture, the Evaluation-Augmented Dialogue Alignment (EADA) strategy incorporates external knowledge bases and discourse priors, ensuring evaluations are calibrated to psychological expectations. By modeling response coherence, pragmatic relevance, and semantic fluency through structured augmentation and residual alignment, this approach produces interpretable and context-sensitive metrics that closely align with expert human assessments. Experimental results demonstrate that the proposed system achieves superior robustness and granularity across diverse chatbot-generated dialogue datasets in mental health settings, providing a scalable and theoretically grounded dialogue evaluation framework specifically designed for mental health chatbot interactions.
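To make the dual-encoder idea concrete, the following is a minimal, hypothetical sketch of how a context encoder and a response encoder with speaker-role embeddings and temporal attention pooling could score a (context, response) pair in a shared latent space. The class names, dimensions, and layer choices are illustrative assumptions for this page, not the authors' published implementation of IALENet or EADA.

```python
# Hypothetical sketch of a dual-encoder dialogue-quality scorer (not the authors' code).
import torch
import torch.nn as nn


class TurnEncoder(nn.Module):
    """Encodes a token sequence plus speaker-role ids into one latent vector."""

    def __init__(self, vocab_size=30522, d_model=256, n_roles=3):
        super().__init__()
        self.tok = nn.Embedding(vocab_size, d_model)
        self.role = nn.Embedding(n_roles, d_model)    # speaker-role embedding (e.g., user / chatbot / other)
        self.rnn = nn.GRU(d_model, d_model, batch_first=True)
        self.attn = nn.Linear(d_model, 1)             # temporal attention over positions

    def forward(self, token_ids, role_ids):
        x = self.tok(token_ids) + self.role(role_ids)  # fuse token and role information
        h, _ = self.rnn(x)                             # (batch, seq, d_model)
        w = torch.softmax(self.attn(h), dim=1)         # attention weights over time steps
        return (w * h).sum(dim=1)                      # attention-pooled latent vector


class DualEncoderEvaluator(nn.Module):
    """Scores a (context, response) pair via joint features in the shared latent space."""

    def __init__(self, d_model=256, **kw):
        super().__init__()
        self.ctx_enc = TurnEncoder(d_model=d_model, **kw)
        self.rsp_enc = TurnEncoder(d_model=d_model, **kw)
        self.scorer = nn.Sequential(nn.Linear(3 * d_model, 128), nn.ReLU(), nn.Linear(128, 1))

    def forward(self, ctx_ids, ctx_roles, rsp_ids, rsp_roles):
        c = self.ctx_enc(ctx_ids, ctx_roles)
        r = self.rsp_enc(rsp_ids, rsp_roles)
        feats = torch.cat([c, r, c * r], dim=-1)       # concatenation + elementwise interaction
        return self.scorer(feats).squeeze(-1)          # scalar quality/coherence score per pair


if __name__ == "__main__":
    model = DualEncoderEvaluator()
    ctx = torch.randint(0, 30522, (2, 40))             # toy batch: 2 dialogue contexts, 40 tokens each
    rsp = torch.randint(0, 30522, (2, 12))             # 12-token candidate responses
    score = model(ctx, torch.zeros_like(ctx), rsp, torch.ones_like(rsp))
    print(score.shape)                                  # torch.Size([2])
```

In this reading, the knowledge grounding described for EADA would enter as additional features or calibration applied to the scorer's output; that step is not shown here because the abstract does not specify its form.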

Keywords: Chatbot evaluation, Mental Health Dialogue, IALENet, EADA, Latent Semantic Modeling

Received: 14 Jul 2025; Accepted: 07 Oct 2025.

Copyright: © 2025 Sun. This is an open-access article distributed under the terms of the Creative Commons Attribution License (CC BY). The use, distribution or reproduction in other forums is permitted, provided the original author(s) or licensor are credited and that the original publication in this journal is cited, in accordance with accepted academic practice. No use, distribution or reproduction is permitted which does not comply with these terms.

* Correspondence: Weilin Sun, okxe2811@outlook.com

Disclaimer: All claims expressed in this article are solely those of the authors and do not necessarily represent those of their affiliated organizations, or those of the publisher, the editors and the reviewers. Any product that may be evaluated in this article or claim that may be made by its manufacturer is not guaranteed or endorsed by the publisher.