
REVIEW article

Front. Artif. Intell.

Sec. Language and Computation

Volume 8 - 2025 | doi: 10.3389/frai.2025.1609097

An overview of model uncertainty and variability in LLM-based sentiment analysis: challenges, mitigation strategies, and the role of explainability

Provisionally accepted
David Herrera-Poyatos*, Carlos Peláez-González, Cristina Zuheros, Andrés Herrera-Poyatos, Virilo Tejedor, Francisco Herrera and Rosana Montes
  • University of Granada, Granada, Spain

The final, formatted version of the article will be published soon.

Large Language Models (LLMs) have significantly advanced sentiment analysis, yet their inherent uncertainty and variability pose critical challenges to achieving reliable and consistent outcomes. This paper systematically explores the Model Variability Problem (MVP) in LLM-based sentiment analysis, characterized by inconsistent sentiment classification, polarization, and uncertainty arising from stochastic inference mechanisms, prompt sensitivity, and biases in training data. We present illustrative examples and two case studies to highlight its impact and analyze the core causes of MVP, discussing a dozen fundamental reasons for model variability. We pay special attention to explainability, with an analysis of its importance in LLMs from the MVP perspective. In addition, we investigate key challenges and mitigation strategies, paying particular attention to the role of temperature as a driver of output randomness and highlighting the crucial role of explainability in improving transparency and user trust. By providing a structured perspective on stability, reproducibility, and trustworthiness, this study helps develop more reliable, explainable, and robust sentiment analysis models, facilitating their deployment in high-risk domains such as finance, healthcare, and policy making, among others.

Keywords: sentiment analysis, large language models, uncertainty, model variability problem, LLM-based sentiment analysis

Received: 09 Apr 2025; Accepted: 19 Jul 2025.

Copyright: © 2025 Herrera-Poyatos, Peláez-González, Zuheros, Herrera-Poyatos, Tejedor, Herrera and Montes. This is an open-access article distributed under the terms of the Creative Commons Attribution License (CC BY). The use, distribution or reproduction in other forums is permitted, provided the original author(s) or licensor are credited and that the original publication in this journal is cited, in accordance with accepted academic practice. No use, distribution or reproduction is permitted which does not comply with these terms.

* Correspondence: David Herrera-Poyatos, University of Granada, Granada, Spain

Disclaimer: All claims expressed in this article are solely those of the authors and do not necessarily represent those of their affiliated organizations, or those of the publisher, the editors and the reviewers. Any product that may be evaluated in this article or claim that may be made by its manufacturer is not guaranteed or endorsed by the publisher.