AUTHOR=Bitar Ammar, Rosales Rafael, Paulitsch Michael
TITLE=Gradient-based feature-attribution explainability methods for spiking neural networks
JOURNAL=Frontiers in Neuroscience
VOLUME=17
YEAR=2023
URL=https://www.frontiersin.org/journals/neuroscience/articles/10.3389/fnins.2023.1153999
DOI=10.3389/fnins.2023.1153999
ISSN=1662-453X
ABSTRACT=Spiking neural networks (SNNs) are a model of computation imitating biological neurons that operate more sparsely than artificial neural networks (ANNs) by processing event data (spikes). This allows SNNs to achieve ultra-low latency and low power consumption. In this paper, we adapt and quantitatively evaluate, for the first time, gradient-based explainability methods originally developed for conventional ANNs to create input feature attribution maps for SNNs trained through backpropagation that process either: 1) event-based spiking data or 2) real-valued data. The adapted methods address the following known limitations of existing work on explainability methods for SNNs: a) poor scalability, b) applicability restricted to convolutional layers, c) the need to train an additional model, and/or d) maps of activation values instead of true attribution scores. We evaluate the adapted methods on classification tasks (for both real-valued and spiking data) and present quantitative results that confirm the accuracy of the proposed methods in identifying the most important input features through perturbation experiments at the pixel and spike levels. Our results reveal that gradient-based SNN attribution methods, in contrast to model-specific related work, successfully identify highly contributing pixels and spikes with significantly less computation time than model-agnostic methods, and that the chosen coding technique has a noticeable effect on which input features are most significant.