AUTHOR=Kim Daehyun, Chakraborty Biswadeep, She Xueyuan, Lee Edward, Kang Beomseok, Mukhopadhyay Saibal
TITLE=MONETA: A Processing-In-Memory-Based Hardware Platform for the Hybrid Convolutional Spiking Neural Network With Online Learning
JOURNAL=Frontiers in Neuroscience
VOLUME=16
YEAR=2022
URL=https://www.frontiersin.org/journals/neuroscience/articles/10.3389/fnins.2022.775457
DOI=10.3389/fnins.2022.775457
ISSN=1662-453X
ABSTRACT=We present a processing-in-memory (PIM)-based hardware platform, referred to as MONETA, for on-chip acceleration of inference and learning in hybrid convolutional spiking neural networks (ConvSNNs). MONETA uses 8T SRAM-based PIM cores for vector-matrix multiplication (VMM), augmented with Spike-Timing-Dependent Plasticity (STDP)-based weight updates. An SNN-focused data flow is presented that minimizes data movement in MONETA while preserving learning accuracy. MONETA supports online, on-chip training on a PIM architecture. The STDP-trained ConvSNN with the proposed data flow, 4-bit input precision, and 8-bit weight precision shows only 1.63% lower accuracy on CIFAR-10 compared to the software-implemented STDP baseline. Further, the proposed architecture is used to accelerate a hybrid SNN architecture that couples off-chip supervised training (backpropagation through time) with on-chip unsupervised training (STDP). We also evaluate this hybrid network architecture with the proposed data flow. Its accuracy is 11.58% higher than the STDP-trained result and 1.40% higher than the backpropagation-trained ConvSNN result. Physical design of MONETA in 65 nm CMOS shows power efficiencies of 18.69 TOPS/W, 7.25 TOPS/W, and 10.41 TOPS/W for the inference mode, learning mode, and hybrid learning mode, respectively.