
Review Article, provisionally accepted. The full text will be published soon.

Front. Neuroinform. | doi: 10.3389/fninf.2018.00068

Code generation in computational neuroscience: a review of tools and techniques

  • 1Institute of Neuroscience and Medicine (INM-6), Institute for Advanced Simulation (IAS-6), JARA BRAIN Institute I, Forschungszentrum Jülich, Helmholtz-Gemeinschaft Deutscher Forschungszentren (HZ), Germany
  • 2INSERM, CNRS, Institut de la Vision, Université Pierre et Marie Curie, France
  • 3Department of Psychology, Cornell University, United States
  • 4Monash Biomedical Imaging, Faculty of Science, Monash University, Australia
  • 5Department of Automatic Control and Systems Engineering, University of Sheffield, United Kingdom
  • 6Department of Computer Science, University of Sheffield, United Kingdom
  • 7Unité de Neurosciences, Information et Complexité, Centre national de la recherche scientifique (CNRS), France
  • 8Simulation Lab Neuroscience, Institute for Advanced Simulation, JARA, Forschungszentrum Jülich, Helmholtz-Gemeinschaft Deutscher Forschungszentren (HZ), Germany
  • 9Department of Neuroscience, Physiology and Pharmacology, University College London, United Kingdom
  • 10Imperial College London, United Kingdom
  • 11Yale University, United States
  • 12University of Manchester, United Kingdom
  • 13Blue Brain Project, Campus Biotech, École Polytechnique Fédérale de Lausanne (EPFL), Switzerland
  • 14Departamento de Física, Faculdade de Filosofia, Ciências e Letras de Ribeirão Preto, Universidade de São Paulo, Ribeirão Preto, Brazil
  • 15Faculty of Psychology, Institute of Cognitive Neuroscience, Ruhr-Universität Bochum, Germany
  • 16Kirchhoff-Institute for Physics, Universität Heidelberg, Germany
  • 17Centre for Computational Neuroscience and Robotics, School of Engineering and Informatics, University of Sussex, United Kingdom
  • 18Software Engineering, Jülich Aachen Research Alliance (JARA), RWTH Aachen University, Germany
  • 19Institut de Neurosciences des Systèmes, Aix-Marseille Université, France

Advances in experimental techniques and computational power that allow researchers to gather anatomical and electrophysiological data at unprecedented levels of detail have fostered the development of increasingly complex models in computational neuroscience. Large-scale, biophysically detailed cell models pose a particular set of computational challenges, and this has led to the development of a number of domain-specific simulators. At the other end of the scale of detail, the ever-growing variety of point neuron models raises the implementation barrier even for those based on the relatively simple integrate-and-fire neuron model. Regardless of model complexity, all modeling methods crucially depend on an accurate transformation of mathematical model descriptions into efficiently executable code.
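To make the last point concrete, the sketch below (not taken from the article; all parameter values and names are illustrative) shows what such a transformation looks like when done by hand: the integrate-and-fire equation dV/dt = (E_L - V + R*I)/tau_m is translated into a forward-Euler update loop.

    # Minimal sketch (not from the article): a hand translation of the leaky
    # integrate-and-fire equation dV/dt = (E_L - V + R*I)/tau_m into a
    # forward-Euler update loop. All parameter values are illustrative.
    import numpy as np

    def simulate_lif(I, dt=0.1, tau_m=10.0, E_L=-65.0, R=10.0,
                     V_th=-50.0, V_reset=-65.0):
        """Return spike times (ms) of a LIF neuron driven by current trace I."""
        V = E_L
        spike_times = []
        for step, I_t in enumerate(I):
            V += dt * (E_L - V + R * I_t) / tau_m   # Euler step of the ODE
            if V >= V_th:                           # threshold crossing
                spike_times.append(step * dt)
                V = V_reset                         # reset after the spike
        return np.array(spike_times)

    # Example: 100 ms of constant suprathreshold input current.
    print(simulate_lif(np.full(1000, 2.0)))

Even for this simple model, choices about the integration scheme, step size and reset handling are baked into the hand-written code, which is exactly where errors and inefficiencies can creep in.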

Neuroscientists usually publish model descriptions in terms of the underlying mathematical equations. However, actually simulating a model requires that these equations be translated into code. This can cause problems: errors may be introduced when the translation is carried out by hand, and code written by neuroscientists may not be computationally efficient. Furthermore, the translated code may target different hardware platforms or operating system variants, or be written in different languages, and thus cannot easily be combined or even compared. Two main approaches to addressing these issues have been followed. The first is to limit users to a fixed set of optimized models, which restricts flexibility. The second is to allow model definitions in a high-level interpreted language, which may limit performance. Recently, a third approach has become increasingly popular: using code generation to automatically translate high-level descriptions into efficient low-level code, combining the best of the previous approaches. This approach also greatly enriches efforts to standardize simulator-independent model description languages.
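As a toy illustration of the third approach (not any particular simulator's pipeline; the description format, template and all names are invented for this example), a code generator can fill a template with the equations of a high-level model description and emit compilable low-level source:

    # Toy illustration of code generation (not any particular simulator's
    # pipeline): a high-level model description, written once by the modeller,
    # is turned into compilable C source by filling a template. The description
    # format and all names here are invented for this example.
    from string import Template

    model = {
        "name": "lif",
        "rhs": "(E_L - V + R * I) / tau_m",  # right-hand side of dV/dt
        "threshold": "V >= V_th",
        "reset": "V_reset",
    }

    c_template = Template(
        "/* auto-generated update step for model '$name' */\n"
        "double ${name}_update(double V, double I, double dt,\n"
        "                      double E_L, double R, double tau_m,\n"
        "                      double V_th, double V_reset) {\n"
        "    V += dt * ($rhs);\n"
        "    if ($threshold) { V = $reset; }\n"
        "    return V;\n"
        "}\n"
    )

    print(c_template.substitute(model))  # the emitted C can be compiled and linked

Real pipelines add parsing, validation and optimization passes on top of such templating, but the basic design choice is the same: the model description stays separate from, and independent of, the generated target code.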

In the past few years, a number of code generation pipelines have been developed in the computational neuroscience community, differing considerably in aim, scope and functionality. This article provides an overview of the pipelines currently used within the community and contrasts their capabilities as well as the technologies and concepts behind them.

Keywords: code generation, simulation, neuronal networks, domain-specific language, modeling language

Received: 15 Mar 2018; Accepted: 12 Sep 2018.

Edited by:

Arjen Van Ooyen, VU University Amsterdam, Netherlands

Reviewed by:

Astrid A. Prinz, Emory University, United States
Michael Schmuker, University of Hertfordshire, United Kingdom
Richard C. Gerkin, Arizona State University, United States  

Copyright: © 2018 Blundell, Brette, Cleland, Close, Coca, Davison, Diaz Pier, Fernandez Musoles, Gleeson, Goodman, Hines, Hopkins, Kumbhar, Lester, Marin, Morrison, Müller, Nowotny, Peyser, Plotnikov, Richmond, Rowley, Rumpe, Stimberg, Stokes, Tomkins, Trensch, Woodman and Eppler. This is an open-access article distributed under the terms of the Creative Commons Attribution License (CC BY). The use, distribution or reproduction in other forums is permitted, provided the original author(s) and the copyright owner(s) are credited and that the original publication in this journal is cited, in accordance with accepted academic practice. No use, distribution or reproduction is permitted which does not comply with these terms.

* Correspondence: Dr. Jochen M. Eppler, Forschungszentrum Jülich, Helmholtz-Gemeinschaft Deutscher Forschungszentren (HZ), Simulation Lab Neuroscience, Institute for Advanced Simulation, JARA, Jülich, Germany, j.eppler@fz-juelich.de