
This article is part of the Research Topic: Deep Learning in Neuromorphic Systems

Original Research Article, provisionally accepted. The full text will be published soon.

Front. Neurosci. | doi: 10.3389/fnins.2018.00840

Memory-efficient Deep Learning on a SpiNNaker 2 prototype

  • 1Technische Universität Dresden, Germany
  • 2Graz University of Technology, Austria
  • 3Georg-August-Universität Göttingen, Germany
  • 4University of Manchester, United Kingdom

The memory requirement of deep learning algorithms is considered incompatible with the
memory restrictions of energy-efficient hardware. A low memory footprint can be achieved by
pruning obsolete connections or by reducing the precision of connection strengths after the network
has been trained. Yet, these techniques are not applicable when neural networks have to be
trained directly on hardware with such hard memory constraints. Deep Rewiring
(DEEP R) is a training algorithm that continuously rewires the network while preserving very
sparse connectivity throughout the training procedure. We apply DEEP R to a deep neural network
implementation on a prototype chip of the second-generation SpiNNaker system. The local memory of
a single core on this chip is limited to 64 KB, and the standard LeNet-300-100 network architecture
is trained entirely within this constraint, without the use of external memory. Throughout training,
the proportion of active connections is limited to 1.3%. On the MNIST handwritten digit dataset,
this extremely sparse network achieves 96.6% classification accuracy at convergence. Utilizing
the multi-processor feature of the SpiNNaker system, we found very good scaling in terms of
computation time, per-core memory consumption, and energy consumption. When compared to a
standard CPU implementation, neural network training on the SpiNNaker 2 prototype reduces
power and energy consumption by two orders of magnitude.
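
To make the rewiring idea concrete, the sketch below shows one DEEP R update step for a single sparse weight matrix in Python/NumPy. It is a minimal illustration, not the authors' SpiNNaker implementation: the layer sizes, the hyper-parameters (eta, alpha, temperature), the gradient stand-in, and the helper name deep_r_step are all illustrative assumptions; only the general rule (SGD with L1 regularization and noise on non-negative parameters, followed by rewiring that keeps the number of active connections constant) follows the algorithm described in the abstract.

```python
# Minimal sketch of one DEEP R update step for one sparse weight matrix.
# Hyper-parameters, shapes, and the dummy gradient are assumptions for
# illustration, not values from the paper.
import numpy as np

rng = np.random.default_rng(0)

n_in, n_out = 300, 100          # e.g. one LeNet-300-100 layer (assumed shapes)
sparsity = 0.013                # ~1.3% active connections, as in the abstract
n_active = int(sparsity * n_in * n_out)

# Each active connection k stores (row, col, sign, theta >= 0);
# its effective weight is sign * theta.
rows = rng.integers(0, n_in, n_active)
cols = rng.integers(0, n_out, n_active)
signs = rng.choice([-1.0, 1.0], n_active)
theta = rng.uniform(0.0, 0.01, n_active)

eta, alpha, temperature = 0.05, 1e-4, 1e-5   # assumed hyper-parameters


def deep_r_step(grad_w, rows, cols, signs, theta):
    """One DEEP R step: SGD + L1 penalty + noise on theta, then rewiring.

    grad_w: dense gradient of the loss w.r.t. the full weight matrix;
            only its entries at the active coordinates are used.
    """
    # Gradient w.r.t. theta is sign * dE/dw at the active entries.
    grad_theta = signs * grad_w[rows, cols]
    noise = rng.normal(size=theta.shape)
    theta = (theta - eta * grad_theta - eta * alpha
             + np.sqrt(2.0 * eta * temperature) * noise)

    # Connections whose theta crossed zero become dormant ...
    dead = theta < 0.0
    n_dead = int(dead.sum())
    # ... and the same number of connections is re-activated at random
    # coordinates with theta = 0, so the count of active connections and
    # hence the memory footprint stay constant (possible duplicate
    # coordinates are ignored in this simplified sketch).
    rows[dead] = rng.integers(0, n_in, n_dead)
    cols[dead] = rng.integers(0, n_out, n_dead)
    signs[dead] = rng.choice([-1.0, 1.0], n_dead)
    theta[dead] = 0.0
    return rows, cols, signs, theta


# Dummy gradient as a stand-in for backpropagation through the sparse layer.
grad_w = rng.normal(scale=0.1, size=(n_in, n_out))
rows, cols, signs, theta = deep_r_step(grad_w, rows, cols, signs, theta)
print("active connections:", theta.size)   # unchanged: rewiring preserves sparsity
```

Because the number of active connections never changes, the per-core memory needed for the connection list can be allocated once and reused for the whole training run, which is what makes training within a 64 KB local memory feasible.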

Keywords: deep rewiring, pruning, sparsity, SpiNNaker, memory footprint, parallelism, energy-efficient hardware

Received: 29 Jul 2018; Accepted: 29 Oct 2018.

Edited by:

André Van Schaik, Western Sydney University, Australia

Reviewed by:

Terrence C. Stewart, University of Waterloo, Canada
Mark D. McDonnell, University of South Australia, Australia
Hesham Mostafa, University of California, San Diego, United States  

Copyright: © 2018 Liu, Bellec, Vogginger, Kappel, Partzsch, Neumärker, Höppner, Maass, Furber, Legenstein and Mayr. This is an open-access article distributed under the terms of the Creative Commons Attribution License (CC BY). The use, distribution or reproduction in other forums is permitted, provided the original author(s) and the copyright owner(s) are credited and that the original publication in this journal is cited, in accordance with accepted academic practice. No use, distribution or reproduction is permitted which does not comply with these terms.

* Correspondence: Dr. Chen Liu, Technische Universität Dresden, Dresden, Germany, chen.liu@tu-dresden.de