AUTHOR=Gehrke Lukas, Koselevs Aleksandrs, Klug Marius, Gramann Klaus
TITLE=Neuroadaptive haptics: a proof-of-concept comparing reinforcement learning from explicit ratings and neural signals for adaptive XR systems
JOURNAL=Frontiers in Virtual Reality
VOLUME=Volume 6 - 2025
YEAR=2025
URL=https://www.frontiersin.org/journals/virtual-reality/articles/10.3389/frvir.2025.1616442
DOI=10.3389/frvir.2025.1616442
ISSN=2673-4192
ABSTRACT=
Introduction: Neuroadaptive technology provides a promising path to enhancing immersive extended reality (XR) experiences by dynamically tuning multisensory feedback to user preferences. This study introduces a novel system employing reinforcement learning (RL) to adapt haptic rendering in XR environments based on user feedback derived either explicitly from user ratings or implicitly from neural signals measured via electroencephalography (EEG).
Methods: Participants interacted with virtual objects in a VR environment and rated their experience using a traditional questionnaire while their EEG data were recorded. In two subsequent RL conditions, an RL agent attempted to tune the haptics to the user, learning either from rewards derived from explicit ratings or from implicit neural signals decoded from EEG.
Results: The neural decoder achieved a mean F1 score of 0.8, indicating informative yet noisy classification. Exploratory analyses revealed instability in the RL agent's behavior in both the explicit and implicit feedback conditions.
Discussion: A limited number of interaction steps likely constrained exploration and contributed to convergence instability. Revisiting the interaction design to support more frequent sampling may improve robustness to EEG noise and mitigate drifts in subjective experience. By demonstrating RL-based adaptation from implicit neural signals, our proof-of-concept is a step towards seamless, low-friction personalization in XR.
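
To illustrate the adaptation loop the abstract describes, the sketch below shows a minimal epsilon-greedy agent that selects among a few haptic intensity levels and updates its value estimates from a reward signal supplied either by an explicit rating or by a noisier EEG-based decoder. This is not the authors' implementation; all names and parameters (HAPTIC_LEVELS, EPSILON, the simulated reward functions) are hypothetical placeholders chosen only to make the two-feedback-source idea concrete.

```python
# Minimal sketch (assumed, not the paper's code): bandit-style RL over haptic
# rendering levels, rewarded either by an explicit rating or by an EEG decoder.
import random

HAPTIC_LEVELS = [0.0, 0.25, 0.5, 0.75, 1.0]  # hypothetical vibration intensities
EPSILON = 0.2                                # exploration rate (assumed value)
q_values = {level: 0.0 for level in HAPTIC_LEVELS}
counts = {level: 0 for level in HAPTIC_LEVELS}

def explicit_rating_reward(level: float) -> float:
    """Stand-in for a questionnaire rating mapped to [0, 1]."""
    preferred = 0.5  # simulated user preference, purely illustrative
    return max(0.0, 1.0 - abs(level - preferred)) + random.gauss(0, 0.05)

def eeg_decoder_reward(level: float) -> float:
    """Stand-in for an implicit reward decoded from EEG.

    Modeled as the explicit preference plus extra noise, reflecting that the
    decoder is informative but imperfect (the paper reports a mean F1 of 0.8).
    """
    return explicit_rating_reward(level) + random.gauss(0, 0.2)

def step(reward_fn) -> None:
    """One epsilon-greedy interaction: pick a level, observe reward, update."""
    if random.random() < EPSILON:
        level = random.choice(HAPTIC_LEVELS)        # explore
    else:
        level = max(q_values, key=q_values.get)     # exploit current best
    reward = reward_fn(level)
    counts[level] += 1
    # Incremental mean update of the value estimate for the chosen level.
    q_values[level] += (reward - q_values[level]) / counts[level]

# Example: a short session in each feedback condition.
for _ in range(50):
    step(explicit_rating_reward)   # explicit-feedback condition
for _ in range(50):
    step(eeg_decoder_reward)       # implicit (EEG-decoded) feedback condition
print({k: round(v, 2) for k, v in q_values.items()})
```

With only a few dozen interaction steps, the value estimates in such a loop remain noisy, which mirrors the abstract's point that a limited number of interactions constrains exploration and can destabilize convergence, especially when the reward is the noisier decoded signal.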