
ORIGINAL RESEARCH article

Front. Neuroergonomics

Sec. Neurotechnology and Systems Neuroergonomics

Volume 6 - 2025 | doi: 10.3389/fnrgo.2025.1589734

This article is part of the Research Topic: Insights from the 5th International Neuroergonomics Conference.

Towards Neuroadaptive Chatbots: A Feasibility Study

Provisionally accepted
  • 1Brandenburg University of Technology Cottbus-Senftenberg, Senftenberg, Germany
  • 2Zander Laboratories GmbH, Cottbus, Germany

The final, formatted version of the article will be published soon.

Introduction: Large language models (LLMs) are transforming most industries and are set to become a cornerstone of the human digital experience. While integrating explicit human feedback into the training and development of LLM-based chatbots has been integral to recent progress, more work is needed to understand how best to align them with human values. Implicit human feedback enabled by passive brain-computer interfaces (pBCIs) could help unlock the hidden nuance of users' cognitive and affective states during interaction with chatbots. This study investigates the feasibility of using pBCIs to decode mental states in reaction to text stimuli, laying the groundwork for neuroadaptive chatbots.

Methods: Two paradigms were designed to elicit moral judgment and error processing with text stimuli. Electroencephalography (EEG) data were recorded from 64 gel electrodes while participants completed reading tasks. Mental-state classifiers were trained offline with a windowed-means approach and linear discriminant analysis (LDA) on both full-component and brain-component data. The corresponding event-related potentials (ERPs) were visually inspected.

Results: Moral salience was successfully decoded at the single-trial level, with an average calibration accuracy of 78% based on a 600 ms data window. Subsequent classifiers were unable to distinguish moral judgment congruence (i.e., moral agreement) from incongruence (i.e., moral disagreement). Error processing in reaction to factual inaccuracy was decoded with an average calibration accuracy of 66%. The identified ERPs for the investigated mental states partly aligned with previous findings.

Discussion: This study demonstrates the feasibility of using pBCIs to distinguish mental states from readers' brain data at the single-trial level. More work is needed to transition from offline to online investigations and to determine whether reliable pBCI classifiers can also be obtained in less controlled language tasks and more realistic chatbot interactions. Our work marks a preliminary step toward understanding and making use of neural implicit human feedback for LLM alignment.
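The decoding pipeline named in the Methods (windowed-means features followed by LDA) is a standard ERP-classification recipe, and can be sketched as follows. This is a minimal illustration, not the authors' implementation: the sampling rate, epoch length, 100 ms window width, and shrinkage-regularized LDA variant are all assumptions, and synthetic data stands in for the EEG recordings.

```python
import numpy as np
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)

# Synthetic stand-in for epoched EEG: (trials, channels, samples).
# Assumed: 100 Hz sampling, 600 ms epochs -> 60 samples per trial.
n_trials, n_channels, n_samples = 100, 64, 60
fs = 100
X_epochs = rng.standard_normal((n_trials, n_channels, n_samples))
y = rng.integers(0, 2, n_trials)  # e.g. morally salient vs. neutral text
# Inject a class-dependent deflection (a toy "ERP") so classes separate.
X_epochs[y == 1, :, 20:40] += 0.5

def windowed_means(epochs, fs, win_ms=100):
    """Average each channel within consecutive time windows, then
    flatten the window means into one feature vector per trial."""
    win = int(fs * win_ms / 1000)
    n_wins = epochs.shape[2] // win
    feats = [epochs[:, :, i * win:(i + 1) * win].mean(axis=2)
             for i in range(n_wins)]
    return np.concatenate(feats, axis=1)  # (trials, channels * n_wins)

X = windowed_means(X_epochs, fs)
# Shrinkage-regularized LDA copes with many correlated EEG features.
clf = LinearDiscriminantAnalysis(solver="lsqr", shrinkage="auto")
scores = cross_val_score(clf, X, y, cv=5)
print(f"Mean cross-validated accuracy: {scores.mean():.2f}")
```

Cross-validated accuracy on calibration data, as reported here, is the usual figure of merit for such offline pBCI classifiers.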

Keywords: passive brain-computer interfaces, pBCI, LLM, error-processing, moral judgment

Received: 07 Mar 2025; Accepted: 11 Aug 2025.

Copyright: © 2025 Gherman and Zander. This is an open-access article distributed under the terms of the Creative Commons Attribution License (CC BY). The use, distribution or reproduction in other forums is permitted, provided the original author(s) or licensor are credited and that the original publication in this journal is cited, in accordance with accepted academic practice. No use, distribution or reproduction is permitted which does not comply with these terms.

* Correspondence: Diana E Gherman, Brandenburg University of Technology Cottbus-Senftenberg, Senftenberg, Germany

Disclaimer: All claims expressed in this article are solely those of the authors and do not necessarily represent those of their affiliated organizations, or those of the publisher, the editors and the reviewers. Any product that may be evaluated in this article or claim that may be made by its manufacturer is not guaranteed or endorsed by the publisher.