ORIGINAL RESEARCH article
Front. Artif. Intell.
Sec. Natural Language Processing
Volume 8 - 2025 | doi: 10.3389/frai.2025.1556557
A Hybrid Approach for Multi-Class Multi-Label Emotion Recognition in Persian: Stacking Pre-trained LLMs with Small Transformer Models
Provisionally accepted
1 Tarbiat Modares University, Tehran, Iran
2 Islamic Azad University of Arak, Arak, Markazi, Iran
Emotion recognition from text is a crucial task in natural language processing, offering diverse applications across various domains. While substantial progress has been made in English, there remains a pressing need for robust and precise models tailored to emotion analysis in Persian, a language spoken by millions globally. This study introduces a hybrid model, termed SE_LLM_ST, which leverages recent advancements in large language models (LLMs) and transformer-based architectures. SE_LLM_ST functions as a stacking ensemble, integrating pre-trained LLMs with smaller transformer models to effectively capture both the contextual and sequential information vital for Persian emotion recognition. To address the imbalanced class distributions inherent in our dataset, we propose a novel training strategy that combines binary cross-entropy and Tversky loss functions. Our analysis utilizes the publicly available 'EmoPars' dataset, specifically designed for multi-class multi-label tasks, where each text can express one or more of six distinct emotions: anger, fear, happiness, hatred, sadness, and wonder. The multi-class multi-label nature of this task presents unique challenges that we aim to overcome through our proposed approach. The SE_LLM_ST model achieves an average F1 score of 76.4% across all classes, accurately classifying instances with multiple emotions. These results underscore the efficacy of integrating pre-trained LLMs with smaller transformer models within a stacking ensemble framework for emotion analysis in Persian texts.
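The combined binary cross-entropy and Tversky loss described in the abstract can be sketched as follows. This is a minimal illustrative implementation, not the authors' code: the abstract does not specify the Tversky parameters (alpha, beta) or the mixing weight (lam here), so the values below are assumptions for demonstration. Predictions are per-label probabilities for the multi-label setting.

```python
import numpy as np

def bce_loss(y_true, y_prob, eps=1e-7):
    """Mean binary cross-entropy over all labels (multi-label setting)."""
    y_prob = np.clip(y_prob, eps, 1 - eps)
    return float(np.mean(-(y_true * np.log(y_prob)
                           + (1 - y_true) * np.log(1 - y_prob))))

def tversky_loss(y_true, y_prob, alpha=0.5, beta=0.5, eps=1e-7):
    """1 - Tversky index, averaged over classes.

    Soft TP/FP/FN are accumulated over the batch per class; alpha and beta
    weight false positives and false negatives (assumed values here).
    With alpha = beta = 0.5 this reduces to the Dice loss.
    """
    tp = np.sum(y_true * y_prob, axis=0)
    fp = np.sum((1 - y_true) * y_prob, axis=0)
    fn = np.sum(y_true * (1 - y_prob), axis=0)
    ti = (tp + eps) / (tp + alpha * fp + beta * fn + eps)
    return float(np.mean(1.0 - ti))

def combined_loss(y_true, y_prob, lam=0.5):
    """Weighted sum of BCE and Tversky losses; lam is an assumed mixing weight."""
    return lam * bce_loss(y_true, y_prob) + (1 - lam) * tversky_loss(y_true, y_prob)
```

A batch of two samples over two emotion labels, for example, yields a near-zero combined loss for near-perfect probabilities and a larger loss for poor ones; the Tversky term counteracts class imbalance by scoring overlap per class rather than averaging over every label cell as BCE does.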
Keywords: natural language processing (NLP), machine learning, sentiment analysis, Large Language Models (LLMs), deep learning
Received: 09 Jan 2025; Accepted: 27 May 2025.
Copyright: © 2025 Khatibi and Farahani. This is an open-access article distributed under the terms of the Creative Commons Attribution License (CC BY). The use, distribution or reproduction in other forums is permitted, provided the original author(s) or licensor are credited and that the original publication in this journal is cited, in accordance with accepted academic practice. No use, distribution or reproduction is permitted which does not comply with these terms.
* Correspondence: Toktam Khatibi, Tarbiat Modares University, Tehran, Iran
Disclaimer: All claims expressed in this article are solely those of the authors and do not necessarily represent those of their affiliated organizations, or those of the publisher, the editors and the reviewers. Any product that may be evaluated in this article or claim that may be made by its manufacturer is not guaranteed or endorsed by the publisher.