ORIGINAL RESEARCH article
Front. Artif. Intell.
Sec. Medicine and Public Health
This article is part of the Research Topic: Application of chatbot Natural Language Processing models to psychotherapy and behavioral mood health.
Providers of Relief in Distress: RAG-based LLMs as Situation- and Intent-Aware Assistants
Provisionally accepted
Iowa State University, Ames, United States
In high-stress humanitarian and mental health contexts, timely access to accurate, empathetic, and actionable information remains critically limited, especially for at-risk and underserved populations. This work introduces LLooMi, an open-source, retrieval-augmented generation (RAG) conversational agent designed to deliver trustworthy, emotionally attuned, and context-aware support across domains such as mental health crises, housing insecurity, medical emergencies, immigration, and food access. Leveraging large language models (LLMs) with structured prompting, LLooMi reformulates user queries, which are often implicit, emotionally charged, or vague, into actionable intents. It then retrieves from and grounds responses in a curated, domain-specific knowledge base, without storing personal user data, in line with privacy-preserving and ethical AI design principles. LLooMi adopts an intent-aware architecture that adapts its tone, content, and level of detail to the user's inferred psychological state and informational goals. This enables it to deliver fast, directive responses in acute distress scenarios, or longer, validation-oriented support when emotional reassurance is needed, emulating key facets of therapeutic communication. By integrating NLP-driven semantic retrieval, structured dialogue memory, and emotionally adaptive generation, LLooMi offers a novel approach to scalable, human-centered digital mental health interventions. Evaluation shows an average answer correctness (AC) of 92.4% and answer relevancy (AR) of 84.9%, with high scores in readability, perceived trust, and ease of use. These results suggest LLooMi's potential as a complementary NLP-based tool for mental health support in digital psychiatry and crisis care.
Keywords: AI agents, generative AI, health assistant, LLMs, machine learning, retrieval-augmented generation
Received: 24 Sep 2025; Accepted: 04 Feb 2026.
Copyright: © 2026 Nazar, Norman, Northway, Toutoungi, Zatkalik, Carlson, Sabado, Shawa and Selim. This is an open-access article distributed under the terms of the Creative Commons Attribution License (CC BY). The use, distribution or reproduction in other forums is permitted, provided the original author(s) or licensor are credited and that the original publication in this journal is cited, in accordance with accepted academic practice. No use, distribution or reproduction is permitted which does not comply with these terms.
* Correspondence: Mohamed Y. Selim
Disclaimer: All claims expressed in this article are solely those of the authors and do not necessarily represent those of their affiliated organizations, or those of the publisher, the editors and the reviewers. Any product that may be evaluated in this article or claim that may be made by its manufacturer is not guaranteed or endorsed by the publisher.
