Event Abstract

NeuroFlow: FPGA-based Spiking Neural Network Acceleration with High-level Language Support

  • 1 Imperial College London, Department of Computing, United Kingdom
  • 2 Imperial College London, Department of Bioengineering, United Kingdom

Spiking neural networks are a useful tool for cortical modelling and robotic control, but simulating a large network in real time requires high-performance computers or specially built accelerators. Previous accelerators for large-scale spiking neural networks have used Graphics Processing Units (GPUs) or Application-Specific Integrated Circuit (ASIC) chips. While ASICs deliver high performance, they cannot be reconfigured and hence cannot adapt to variations in the design and models employed. GPUs, on the other hand, provide a decent speedup over multi-core CPUs and good flexibility, but they lack the scalability to handle larger networks.

In this work we present NeuroFlow, a Field-Programmable Gate Array (FPGA)-based spiking neural network accelerator consisting of 4 FPGAs. It supports the use of PyNN, a high-level, simulator-independent network description language, to configure the hardware. A major novelty of the system is its capability to generate custom hardware configurations based on various simulation requirements, such as numerical precision and synaptic time delay. The accelerator is implemented on an off-the-shelf MPC-C500 from Maxeler Technologies, which employs a streaming architecture in the FPGAs. It achieves its performance gain primarily by parallelizing the computation of point-neuron models and by low-level optimization of synaptic data memory access. The accelerator currently supports basic PyNN functions such as spike-timing-dependent plasticity (STDP) and arbitrary postsynaptic current kernels.
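To make the point-neuron computation concrete, the following is a minimal software sketch of the kind of per-neuron update that such an accelerator parallelizes across the network: a leaky integrate-and-fire neuron advanced with a forward-Euler step. All function names and parameter values here are illustrative assumptions for exposition, not details taken from the NeuroFlow implementation.

```python
def lif_step(v, i_syn, dt=1.0, tau_m=20.0,
             v_rest=-65.0, v_thresh=-50.0, v_reset=-65.0):
    """One forward-Euler update of a leaky integrate-and-fire point neuron.

    Returns (new membrane potential, spiked flag). Parameters are in
    arbitrary illustrative units (mV, ms).
    """
    dv = (-(v - v_rest) + i_syn) * (dt / tau_m)  # leak plus synaptic drive
    v = v + dv
    if v >= v_thresh:         # threshold crossing: emit a spike and reset
        return v_reset, True
    return v, False

def simulate(n_steps, i_syn):
    """Drive a single neuron with a constant synaptic current; count spikes."""
    v, spikes = -65.0, 0
    for _ in range(n_steps):
        v, fired = lif_step(v, i_syn)
        if fired:
            spikes += 1
    return spikes
```

In hardware, an update of this kind is applied to many neurons in parallel each timestep, which is where the FPGA's fine-grained parallelism pays off; the sequential loop above is purely a functional reference.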

The system is able to support the simulation of networks of approximately 800,000 neurons, and achieves real-time performance for a network of 400,000 neurons firing at 8 Hz with random connections. With a single FPGA running at 150 MHz, the accelerator delivers 1.9 to 3.5 times the throughput of one of the most recent GPU-based accelerators in terms of postsynaptic potential delivery rate (Fidjeland et al., Neuroinformatics, 2012 Dec), depending on the simulated network and the GPU model used.

In conclusion, while harnessing the low-level customization and fine-grained parallelism of FPGAs, NeuroFlow also provides the flexibility of high-level platforms such as GPUs and high-performance computers. It is a promising alternative for accelerating spiking neural network simulations.

Figure 1
Figure 2

Acknowledgements

The research leading to these results has received funding from the European Union Seventh Framework Programme under grant agreement numbers 287804, 248976 and 257906. The support of the Croucher Foundation, UK EPSRC, HiPEAC NoE, the Maxeler University Program, and Xilinx is gratefully acknowledged.

Keywords: FPGA, Spiking Neural Network, PyNN, Hardware Acceleration, Dataflow Computing

Conference: Neuroinformatics 2013, Stockholm, Sweden, 27 Aug - 29 Aug, 2013.

Presentation Type: Oral presentation

Topic: Neuromorphic engineering

Citation: Cheung K, Schultz SR and Luk W (2013). NeuroFlow: FPGA-based Spiking Neural Network Acceleration with High-level Language Support. Front. Neuroinform. Conference Abstract: Neuroinformatics 2013. doi: 10.3389/conf.fninf.2013.09.00095

Copyright: The abstracts in this collection have not been subject to any Frontiers peer review or checks, and are not endorsed by Frontiers. They are made available through the Frontiers publishing platform as a service to conference organizers and presenters.

The copyright in the individual abstracts is owned by the author of each abstract or his/her employer unless otherwise stated.

Each abstract, as well as the collection of abstracts, are published under a Creative Commons CC-BY 4.0 (attribution) licence (https://creativecommons.org/licenses/by/4.0/) and may thus be reproduced, translated, adapted and be the subject of derivative works provided the authors and Frontiers are attributed.

For Frontiers’ terms and conditions please see https://www.frontiersin.org/legal/terms-and-conditions.

Received: 08 Apr 2013; Published Online: 11 Jul 2013.

* Correspondence: Mr. Kit Cheung, Imperial College London, Department of Computing, London, United Kingdom, k.cheung11@imperial.ac.uk