ORIGINAL RESEARCH article

Front. Comput. Neurosci.

Volume 19 - 2025 | doi: 10.3389/fncom.2025.1569374

This article is part of the Research Topic: Machine Learning Integration in Computational Neuroscience: Enhancing Neural Data Decoding and Prediction.

Reinforced Liquid State Machines - New Training Strategies for Spiking Neural Networks Based on Reinforcements

Provisionally accepted
  • 1Leipzig University, Leipzig, Germany
  • 2Center for Scalable Data Analytics and Artificial Intelligence, Leipzig, Germany
  • 3School of Embedded Composite Artificial Intelligence, Leipzig, Germany

The final, formatted version of the article will be published soon.

Feedback and reinforcement signals in the brain act as nature's sophisticated teaching tools, guiding neural circuits toward self-organization, adaptation, and the encoding of complex patterns. This study investigates the impact of two feedback mechanisms within a deep liquid state machine architecture designed for spiking neural networks. The new architecture integrates liquid layers, a winner-takes-all mechanism, a linear readout layer, and a reward-based reinforcement system to enhance learning efficacy. While traditional Liquid State Machines often employ unsupervised approaches, we introduce strict feedback to improve network performance by not only reinforcing correct predictions but also penalizing wrong ones. Specifically, we compare strict feedback to another feedback strategy, known as forgiving feedback, which does not include punishment. Experimental results demonstrate that both feedback mechanisms significantly outperform the baseline unsupervised approach, achieving superior accuracy and adaptability in response to dynamic input patterns. This comparative analysis highlights the potential of feedback integration in deepened Liquid State Machines, offering insights into optimizing spiking neural networks through reinforcement-driven architectures.
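The two reward rules contrasted in the abstract can be sketched in a few lines. This is a minimal illustration, not the authors' implementation: the weight-update function, learning rate, weight bounds, and the eligibility trace (assumed to summarize recent pre/post spike pairings in the liquid) are all hypothetical placeholders; the only part taken from the abstract is that strict feedback both rewards and punishes, while forgiving feedback rewards correct predictions and leaves wrong ones unpunished.

```python
import numpy as np

def rstdp_update(w, eligibility, reward, lr=0.01):
    """Reward-modulated STDP sketch: scale the STDP eligibility trace
    by a scalar reward and keep weights in [0, 1]."""
    return np.clip(w + lr * reward * eligibility, 0.0, 1.0)

def strict_reward(predicted, target):
    # Strict feedback: reinforce correct predictions, punish wrong ones.
    return 1.0 if predicted == target else -1.0

def forgiving_reward(predicted, target):
    # Forgiving feedback: reinforce correct predictions, no punishment.
    return 1.0 if predicted == target else 0.0

rng = np.random.default_rng(0)
# Toy readout weights and a hypothetical eligibility trace.
w = rng.uniform(0.2, 0.8, size=4)
eligibility = rng.normal(0.0, 1.0, size=4)

# A wrong prediction (predicted class 0, target class 1):
w_strict = rstdp_update(w, eligibility, strict_reward(0, 1))        # weights pushed away
w_forgiving = rstdp_update(w, eligibility, forgiving_reward(0, 1))  # weights unchanged
```

On a wrong prediction, the forgiving rule yields a zero reward, so the update is a no-op, whereas the strict rule applies a negative reward that depresses the synapses whose eligibility traces contributed to the error.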

Keywords: spiking neural networks, bio-inspired learning, reinforced spike-timing-dependent plasticity (R-STDP), speech recognition, adaptive neural architectures, neuromorphic computing, temporal learning

Received: 31 Jan 2025; Accepted: 30 Apr 2025.

Copyright: © 2025 Krenzer and Bogdan. This is an open-access article distributed under the terms of the Creative Commons Attribution License (CC BY). The use, distribution or reproduction in other forums is permitted, provided the original author(s) or licensor are credited and that the original publication in this journal is cited, in accordance with accepted academic practice. No use, distribution or reproduction is permitted which does not comply with these terms.

* Correspondence: Dominik Krenzer, Leipzig University, Leipzig, Germany

Disclaimer: All claims expressed in this article are solely those of the authors and do not necessarily represent those of their affiliated organizations, or those of the publisher, the editors and the reviewers. Any product that may be evaluated in this article or claim that may be made by its manufacturer is not guaranteed or endorsed by the publisher.